Everywhere we travel these days, we hear someone talking about making their city a “smart city.” We put our faith in those initiatives that have determined what they want “smart” to achieve, whether it be zero net water consumption, energy-neutral public infrastructure, shorter commute times, or automating services like refuse collection. Committing to measurable goals is a key milestone in a city’s journey to becoming “smart.”
Another critical milestone is making sure that the required connectivity infrastructure, what we commonly call “the network,” is up to the task. A successful smart city requires that city services be connected and automated wherever possible over a secure, resilient, reliable network.
To put this in context, let’s consider one of my favorite futuristic functions of the smart city. Today, when you drive to work, your vehicle sits idle for hours at the other end, doing nothing. But by combining the concept of the self-driving car with the smart city, that car could become a shared resource, used by nearby businesses to courier packages, deliver food, or serve as a Careem or Uber, dramatically improving its utilization and making you some money in the process because hey, it’s your car, and if it’s in use, you get paid!
For this to work, you need the numerous connected systems to be able to talk to one another: from the app that lets the city know your car is available for use, to the businesses that will leverage it during the day, to the systems monitoring its whereabouts on your behalf. The network is the key, and it had better be highly available, resilient, and secure.
Unless these requirements are addressed, the vision can never materialize. But that’s just one issue to consider. What has also arisen is an all-too-familiar challenge: siloed, proprietary networks deployed by various agencies and enterprises, which are often costly and don’t allow data to be shared between them.
If we are to achieve real interoperability between agencies, we need standards with well-documented, stable APIs being developed and deployed as these networks are built. If those standards are in place, we will be able to share data between different city groups and functions regardless of who owns the underlying systems.
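To make that idea concrete, here is a minimal sketch of what such a shared data format might look like. The event name, fields, and the notion of a city open API are illustrative assumptions for this article, not an existing standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical, standardized "vehicle availability" event. The field names and
# the idea of a city open API are illustrative assumptions, not a real standard.
@dataclass
class VehicleAvailabilityEvent:
    vehicle_id: str        # anonymized identifier, not a licence plate
    available_from: str    # ISO 8601 timestamps so every agency parses them identically
    available_until: str
    location: dict         # WGS84 lat/lon, a common geodata convention
    capabilities: list     # e.g. ["parcel_courier", "food_delivery", "ride_hail"]

event = VehicleAvailabilityEvent(
    vehicle_id="veh-4821",
    available_from=datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc).isoformat(),
    available_until=datetime(2024, 5, 1, 17, 0, tzinfo=timezone.utc).isoformat(),
    location={"lat": 25.2048, "lon": 55.2708},
    capabilities=["parcel_courier", "food_delivery"],
)

# Any agency or business consuming the shared API would receive the same
# well-defined JSON document, regardless of who owns the publishing system.
print(json.dumps(asdict(event), indent=2))
```

The point is not the specific fields but the stability of the contract: once the schema is agreed and documented, the parking authority, a courier firm, and a ride-hailing operator can all consume the same feed without custom integration work.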
With the variety of Internet of Things sensors transmitting everything from a few bits per second for simple monitoring points to high-definition video that can flood the network, making sure we have the bandwidth and capacity across the city is an essential first step.
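A back-of-envelope estimate shows why this matters. The device counts and bitrates below are purely illustrative assumptions, not figures for any real city, but they make the imbalance obvious: a few thousand video streams dwarf hundreds of thousands of low-rate sensors.

```python
# Back-of-envelope capacity estimate. All device counts and bitrates are
# illustrative assumptions, not measurements from a real deployment.
device_classes = {
    # name: (number_of_devices, average_bitrate_in_bits_per_second)
    "parking/air-quality sensors": (200_000, 100),        # a few hundred bits/s each
    "smart meters":                (500_000, 1_000),      # roughly 1 kbit/s each
    "traffic HD cameras":          (5_000, 4_000_000),    # roughly 4 Mbit/s per stream
}

total_bps = sum(count * bitrate for count, bitrate in device_classes.values())

for name, (count, bitrate) in device_classes.items():
    share = count * bitrate / total_bps
    print(f"{name:<30} {count * bitrate / 1e9:6.2f} Gbit/s ({share:.0%} of total)")

print(f"{'aggregate sustained load':<30} {total_bps / 1e9:6.2f} Gbit/s")
```

Under these assumptions the cameras alone account for roughly twenty of the twenty-plus gigabits per second of sustained load, which is why capacity planning has to start with the heaviest traffic sources rather than the sheer number of devices.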
We will want to squeeze as much capacity out of the existing infrastructure as possible. That means identifying and lighting unused dark fiber across the city. It also means overlaying existing 3G and 4G services with a 5G network and making sure these systems reach every corner of the city; there can be no coverage gaps. Ultimately it comes down to ensuring the underlying network is optimized.
That’s why the underlying network should be software-defined, with as many of its services virtualized as possible. It’s the best way to achieve speed and scale of service. On a software-defined networking (SDN) architecture, capacity can be shifted away from less critical services to where it is needed most, such as a major local sporting event or an emergency that demands constant connectivity.
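Here is a toy sketch of the kind of policy an SDN controller could apply during such a surge. The traffic classes, priorities, and the 10 Gbit/s link size are assumptions made for the example; no real controller API is being modeled.

```python
# Toy bandwidth-reallocation policy: higher-priority classes are served first,
# and lower-priority traffic is throttled to whatever capacity remains.
# Class names, priorities, and link size are assumptions for illustration.
LINK_CAPACITY_GBPS = 10.0

traffic_classes = [
    # (name, priority: lower number = more important, demand in Gbit/s)
    ("emergency-services",    0, 1.0),
    ("traffic-management",    1, 2.0),
    ("stadium-event-video",   1, 5.0),   # surge demand during the event
    ("environmental-sensors", 2, 0.5),
    ("bulk-backups",          3, 4.0),
]

def allocate(classes, capacity):
    """Grant bandwidth in priority order; whatever is left goes to lower tiers."""
    remaining = capacity
    plan = {}
    for name, _priority, demand in sorted(classes, key=lambda c: c[1]):
        granted = min(demand, remaining)
        plan[name] = granted
        remaining -= granted
    return plan

for name, gbps in allocate(traffic_classes, LINK_CAPACITY_GBPS).items():
    print(f"{name:<22} {gbps:4.1f} Gbit/s")
```

In this sketch the bulk backups are squeezed from 4 to 1.5 Gbit/s so the event video and emergency traffic get everything they need, which is exactly the sort of on-demand reshuffling that is painful on a static network and routine on a software-defined one.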
The rush to create a smart city is understandable; it is simply a more intelligent way for a city to run. But it’s worth taking a step back, looking at the fundamentals and asking whether we understand what we are trying to achieve with “smart”: making sure, first and foremost, that everything can work together, that we have the required connectivity, and that we have the capacity available to keep it going. Otherwise, while the smart city of tomorrow might be faster than most, we’ll always be left wondering if we left some intelligence in the tank.