DILIP VERMA, REGIONAL VP, INDIA // MARCH 26, 2018
Almost all cities are currently looking into or have already taken steps to become safe cities. A few have even begun making the transition to becoming smart cities. The next stage in this urban evolution is the so-called “cognitive” city. Even though the concept of cognitive cities is still in its infancy, at the end of our recent 3-part series we identified two key ingredients: advanced analytics of large volumes and multiple types of data, and an adaptability that drives resiliency and continuous improvement. Although the advent of truly cognitive cities is well into the future, when they arrive, we’ll be able to distinguish them from both safe and smart cities:
The key to a cognitive city is city-wide cognition of the underlying need, rather than the leveraging of a specific technology or platform. That means being able to rapidly adopt new technologies and practices, which gives the city a natural protection against vendor lock-in. Smart cities, by contrast, tend to approach ICT as just another utility, like water and sewage, electricity, or garbage collection. These are often operated as regulated monopolies, with long contract periods and expensive financial and legal hurdles that prevent switching to another service provider or utility.
Cognitive cities, therefore, are much better positioned to thrive in the face of significant challenges like clean water scarcity, climate change, and providing functional and efficient mass transportation at mega-city scales.
Cognitive cities hold the promise of being more resilient than other urban areas which haven’t made that transition.
In our next post, we’ll take a closer look at how a cognitive city of the future serves its citizens, examine new challenges, and offer an interim view of where we currently are in terms of cognitive cities becoming a reality.
DILIP VERMA, REGIONAL VP, INDIA // APRIL 09, 2018
In our previous post, we discussed that although we don’t know exactly what a cognitive city will look like, we do have a general idea of how it would behave: a cognitive city will be more resilient, adaptable, and efficient than both smart and safe cities, and will not only improve security, safety, and city operations but also more generally improve the lives of the city’s residents.
In today’s post, we’re going to explain why looking beyond the technology itself is key for cities to transition from being smart to becoming cognitive.
One of the reasons why a firmer concept of cognitive cities remains so elusive is because the idea of smart cities hasn’t been fully realized. In the opening installment of our recent three-post series on the transition of smart cities to cognitive ones, we hinted at one of the reasons why:
“There is one major caveat to smart city solutions: the data tends to flow in one direction from what are ultimately surveillance devices to government officials… leading to tensions between personal privacy and government goals of safety and higher efficiency”.
This tension is the root of much of the pushback and criticism of smart city initiatives, including from very tech-savvy media outlets. But it is not the only one.
Here are two others:
Does the technology-driven transition to cognitive cities serve ALL citizens? Even with wireless broadband connectivity becoming cheaper and faster, and sensor and processing technologies getting more affordable, access is likely to be limited to certain parts of the population for at least some time. Consider this article from Wired, which describes an ambitious Google-backed smart city project in Toronto as likely to serve only technologically capable millennials while ignoring other citizens such as the elderly, the disabled, and the poor.
Data doesn’t analyze itself: Another reason why smart cities have yet to widely appear is that people with real data science, statistical, and programming skills are required for a city’s data to work for its people. Again, a very non-technical roadblock arises, as a recent article from the same publication points out: the challenge of hiring knowledgeable people who can actually “separate the data wheat from the data chaff”.
Cities trying to mature from safe to truly smart will encounter these and other constraining realities, but it’s just as important to remember that even cities that eventually become cognitive will encounter them too.
The advantage of a cognitive city lies in the fact that data collection won’t be limited to electronic sensors, information sharing will occur over more than just copper wires or strands of glass fiber, and decision making will be distributed over residents, civic groups, elected officials, and other stakeholders.
All of a city’s fluid, often competing, and overlapping constituencies and systems must sense, adapt, learn, and remember together. This is a key insight from our previous post: collective and individual responses to change and challenges become ingrained habits, a key mechanism behind memory formation in a cognitive city. Those habits – namely citizens’ interaction with ICT systems and each other – can become the means by which the next challenge is met.
Objective data can provide a lot of insight into the practical ramifications of any decision a city makes. Data combined with advanced predictive analytics can help us more intelligently allocate limited resources. Big data, cloud, social, IoT, and machine learning can help make a city smart, but much more is needed to make it wise.
The technologies for enabling this transition are already here. However, the maturity of a cognitive city depends on a lot more than employing the latest tech or popular platform to improve safety, city operations, and the general well-being of the city. More time and patience are needed: for such technologies to become available to and used by more sectors of society, and for community values to be taken into account when considering how, and to what extent, a city should make use of them for a safer and smoother city life for all.
The technologies and practices needed to move the cognitive city from concept to reality are progressing, as are the social processes described in this article. We will be sure to keep a watchful eye and share our insights.
DILIP VERMA, REGIONAL VP, INDIA // APRIL 23, 2018
All over the world, safe city initiatives are popping up. Today’s abundance of captured data, connectivity, analytics, and computing power has made the once futuristic concept of a smart city an increasingly common reality. But perhaps it is the smart safe city that has been its biggest enabler, as it has allowed us to apply, test, and prove the basic principles of the concept.
As we’ve discussed in a previous blog post, a safe city is a core element of a smart city. There is no more fundamental and important public issue than security and safety. And it is with this understanding that India has approached its growing global leadership in this arena.
In June 2015, Prime Minister Narendra Modi announced the nation’s “100 Smart Cities Mission”. India’s government has approved a total of 15 billion US dollars towards the effort, which includes the development of 100 smart cities. The initiative is true to the principles of a smart city, as funds are distributed through a competition-based method in which citizens are integral to the planning and to the interpretation of ‘smartness’.
An excellent example comes from Nanded, a city in the state of Maharashtra. The city’s leadership determined that monitoring the entire city, as they wanted to, would require an innovative approach. And thus the C-Cube project was conceived. The integrated command, control and communication center is powered by Qognify’s Safe City solution, which includes Situator (a PSIM/situation management solution), video management, and analytics.
This innovative use of technology has made Nanded a benchmark smart city in India. With 24/7 monitoring of the city, situation and disaster management, and predictive and prescriptive guidance, the city has experienced a significant improvement in safety, security, and operations.
In Kolhapur, another city in India, the need was more specific. A heavy influx of religious tourism created growing safety and security issues. The city sought to mitigate the risk associated with these types of spikes in the population. Through the use of Qognify’s Video Management and Video Analytics, law enforcement and city management have been able to monitor, manage, and prevent unfolding events.
The Control Room at Kolhapur
While Kolhapur is an ancient city, Navi Mumbai is a new planned township, designed to handle the population overflow from Mumbai. Without the restrictions posed by existing infrastructure, city planners and leaders were able to design a ground-up smart safe city solution with Qognify technology.
The solution monitors all the critical points within the city, such as public transportation, schools, heavily traveled traffic junctions, city entrances and exits, open-air markets, and utility infrastructure. Additionally, by integrating third-party systems and sensors, the city has a complete operational view of everything that is taking place or potentially unfolding.
While India has embraced the smart and safe city concept as a nation, cities all over the world are pursuing tremendously innovative initiatives of their own. San Francisco, named the Greenest City in the U.S. in 2011, has declared a goal of achieving zero waste by 2020 and becoming carbon-free by 2030. The city intends to meet those objectives through a range of smart initiatives, including making building operations more efficient, reducing energy use, streamlining waste management systems, and improving transportation systems.
Chicago has declared that it wants to become ‘the most data-driven government in the world’. One of the initiatives it is using to get there is called the Array of Things, or AoT, project. Sensors mounted on traffic signal poles will measure everything from temperature and carbon monoxide to ambient sound intensity and pedestrian and vehicle traffic. All of this data will be used to improve quality of life in a variety of ways – making Chicago healthier and more livable, among other things.
Smart and safe cities are no longer a trend, but the future of our urban areas. We’re just discovering the many different applications and forms this may take, but one thing that seems to characterize them all is their intention and purpose.
IFTACH DRORI, DIGITAL MARKETING MANAGER, QOGNIFY // MAY 09, 2018
How Situator mitigates a common incident for the utility industry
DANIEL LIBERFARB, GLOBAL VIDEO ANALYTICS SERVICE & PRODUCT MANAGER // MAY 23, 2018
Facial recognition technology can be an effective capability for security-conscious environments and has become increasingly commonplace. Organizations seeking to add facial recognition to their security mix have a long list of considerations and requirements.
The security market expects facial recognition technology to work with the existing surveillance camera infrastructure, including different angles and heights, different resolutions, and different video qualities (due to environmental variation).
Additional expectations include:
– The detection of people passing from different angles
– Automatic detection and alert of a wanted face (e.g. blacklist)
– Detection of faces in a crowd
Now let’s see how facial recognition and Suspect Search compare:
JAN TERJE BY, GUEST BLOGGER, RACOM // JUNE 04, 2018
Data alone is just that – data. But when structured, it creates knowledge. It provides a higher-level view beyond just the here and now. Where once we used data mostly to understand what has already happened, we can now use it not only to predict what might happen but also to determine how we can prevent it.
Risk is defined as the potential of losing or gaining something of value. With the ability of today’s technology to capture, correlate, and structure massive amounts of data, organizations can know what might happen and the variables that may impact that event. It’s risk management at its highest level. And that’s why the utilization of big data has had such a transformative impact on security and operations – two areas focused on smooth operations and on ensuring business continuity through the elimination or mitigation of risk.
Understanding risk can dramatically change the way you prepare for and address it. For example, in a smart city solution, sensors can identify traffic congestion; that’s called Descriptive Analytics. All of the data surrounding that congestion, and other incidents in the area, can then be captured and analyzed; this is what we call Diagnostic Analytics. It tells us why or how something happened and allows us to identify indicators that could lead to similar events in the future.
Traffic congestion, a thing of the past?
Photo by Jeremy Yap on Unsplash
With this knowledge, data can be used to predict events. Predictive Analytics identifies probable events based on current conditions or indicators. And if you can predict an event based on a deep understanding of the risks involved, you can move on to Prescriptive Analytics, which tells you what you need to do to deter an event from happening by changing the conditions. Continuing with the traffic analogy, Prescriptive Analytics would advise traffic control that by diverting traffic at a particular time and place, probable congestion and accidents could be averted.
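To make the four stages concrete, here is a minimal sketch of the traffic example in code. All of the sensor readings, junction names, thresholds, and indicator rules below are hypothetical, invented purely for illustration; a real smart city platform would work with far richer data and models.

```python
# Illustrative sketch of the four analytics stages applied to traffic data.
# All readings, thresholds, and rules here are hypothetical.

def descriptive(readings):
    """Descriptive: what is happening now? Flag junctions over a congestion threshold."""
    return [r["junction"] for r in readings if r["vehicles_per_min"] > 40]

def diagnostic(history, junction):
    """Diagnostic: why did it happen? Collect conditions that co-occurred with congestion."""
    return {h["condition"] for h in history
            if h["junction"] == junction and h["congested"]}

def predictive(indicators, current_conditions):
    """Predictive: is it likely to happen? True if any known indicator is present now."""
    return any(c in indicators for c in current_conditions)

def prescriptive(likely, junction):
    """Prescriptive: what should we do? Recommend a diversion before congestion forms."""
    return f"divert traffic around {junction}" if likely else "no action"

# Descriptive: a live reading shows junction J1 is congested right now.
congested_now = descriptive([{"junction": "J1", "vehicles_per_min": 55}])

# Diagnostic: past congestion at J1 co-occurred with rain and rush hour.
history = [
    {"junction": "J1", "congested": True, "condition": "rain"},
    {"junction": "J1", "congested": True, "condition": "rush hour"},
    {"junction": "J1", "congested": False, "condition": "clear"},
]
indicators = diagnostic(history, "J1")

# Predictive + prescriptive: rain is forecast, so recommend a diversion.
likely = predictive(indicators, ["rain"])
print(prescriptive(likely, "J1"))  # divert traffic around J1
```

Each function only consumes the output of the previous stage, which mirrors the progression described above: observe, explain, anticipate, then act.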
While the above example increases safety for citizens, data and analytics are also highly effective in improving security by reducing risk. Let’s say a bank has been hit by a string of ATM robberies. The bank can use analytics to establish patterns or indicators based on time, severity, location, and methods. Once indicators are identified, the bank can deploy precise security measures, such as increased patrols at particular times or when certain indicators are present, in an effort to apprehend the thieves.
This sequence of understanding can be applied to almost any scenario. At airports, analytics can identify gathering crowds that lead to delays, dissatisfaction, and potential security risk – and why and how they occur. If certain times of the day or week, weather conditions, and shift changes are indicators of impending congestion, then when these indicators converge, airport management can be alerted to take prescriptive or mitigating action.
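The "when these indicators converge" logic can be sketched very simply. The indicator names and the convergence threshold below are invented for illustration; in practice the indicator set and threshold would come from the diagnostic analysis of historical data.

```python
# Hypothetical sketch: raise an alert when enough known congestion
# indicators are active at the same time. Names and threshold are invented.

CONGESTION_INDICATORS = {"friday_evening", "storm_warning", "shift_change"}

def should_alert(active_indicators, threshold=2):
    """Alert when at least `threshold` known indicators are active at once.

    Returns (alert, matched_indicators) so operators can see the reasons.
    """
    hits = CONGESTION_INDICATORS & set(active_indicators)
    return len(hits) >= threshold, sorted(hits)

alert, reasons = should_alert({"friday_evening", "shift_change", "sunny"})
print(alert, reasons)  # True ['friday_evening', 'shift_change']
```

Returning the matched indicators alongside the boolean is a deliberate choice: an alert that explains itself ("Friday evening plus a shift change") is far more actionable for airport management than a bare warning.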
In the same way that analytics are used by organizations to increase security or performance, they can be used to improve operations. With data coming from both internal and external sources, organizations can use it to ensure smooth operations and business continuity.
As we know, weather can cause massive disruption. Traffic is another disruptor. Combine the two, and organizations can find themselves slowed down or even stopped by environmental conditions and staff’s inability to get to work. The impact of these conditions can be mitigated with advanced data analytics. For essential organizations such as rail, airports, or critical infrastructure, understanding the risk based on external and internal data allows them to take prescriptive action based on predictive analytics. That might mean changing schedules or routes, or ensuring staff availability by housing them on site in advance of the event.
Maintenance is sometimes a struggle for large organizations.
Photo by Michael Weidner on Unsplash
Data is what powers knowledge and insight. It is what allows us to improve overall conditions and better respond to singular events. The biggest obstacle to leveraging data no longer exists. We now have the computing power and technology to extract its value in the form of informational, descriptive, predictive and finally prescriptive analytics. And these contain the knowledge to save lives, costs, time and resources.
DANIEL LIBERFARB, GLOBAL VIDEO ANALYTICS SERVICE & PRODUCT MANAGER // JUNE 18, 2018
How various Qognify customers from various verticals use Video Analytics applications
EREZ GOLDSTEIN, DIRECTOR OF GLOBAL MARKETING, QOGNIFY // JULY 09, 2018
These days, when we think of safety and security, we generally think of catastrophic incidents like terrorism, criminal acts, accidents, or devastating weather events. They certainly deserve attention, given the long-lasting impact and damage they cause – no one is disputing this. But it’s the mundane, often preventable daily incidents that end up costing rail organizations much more.
The reality is, it is much more likely that debris on the tracks or a maintenance issue will cause a costly delay than an accident or criminal act will. Yet much of the discussion around disruptions in rail transportation focuses on the less likely, major incidents. Let’s talk about the impact of the daily disruptions that cause delays, which result in both revenue and reputation loss, in addition to potential fines.
Maintenance issues can cause costly delays
To put this subject into context: according to a UK National Audit Office report, in 2008 infrastructure failures accounted for 40,969 incidents and 3,040,686 minutes of delays. With a cost estimate of €101 per minute, per train, delays due to infrastructure failures cost the UK economy at least €300 million – and that number has only gone up since then.
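As a back-of-the-envelope check of the figures quoted above (treating the reported delay minutes as a simple total, which ignores any per-train compounding):

```python
# Back-of-the-envelope check of the quoted delay-cost figures.
delay_minutes = 3_040_686   # minutes of delay from infrastructure failures (2008)
cost_per_minute = 101       # estimated cost in euros, per minute, per train

total_cost = delay_minutes * cost_per_minute
print(f"€{total_cost:,}")   # €307,109,286
```

That works out to roughly €307 million, consistent with the report's "at least €300 million" estimate.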
That same report also looked at rail fatalities, 78% of which are suicides. In one actual fatal scenario, a Gatwick Express driver reported striking an individual at 18:55 to the Network Rail signaler, who in turn notified the Operations Control Center (OCC), which stopped the area train service. From that point, a liaison from the railways interfaced with the Metropolitan Police, who assumed initial control. That control was later handed over to the British Transport Police, which, upon determining the incident was non-suspicious, reopened some service, but not all of it, until 22:40. The total delay – across all stakeholders – was a considerable 5,758 minutes, or roughly £600,000.
An incident unfolding on the tracks in the UK:
The above two examples are very different. The first one, to a large extent, can be avoided. The second is completely unpredictable – other than the predictability of knowing it will unfortunately happen.
Rail organizations must first try to prevent incidents from happening at all. But, in the event of the uncontrollable, the response must be optimal to reduce the impact and cost.
Things will happen. Even events as mundane as debris covering track signals – far less tragic than a suicide – can cause costly delays. Since it’s not a question of ‘if’ something happens but ‘when’, mitigating impact is the next priority. That requires effective situation management – getting the right information to the right people at the right time.
A first step is to create a common operating picture by integrating technology. By integrating all systems and sensors, collecting all available information, and correlating all of that data, a clear, precise picture of any given situation can be established. The next step is to effectively communicate the relevant information to the relevant stakeholders. While some may require a complete overview, others may only require certain specifics.
In addition, the response should be coordinated and executed according to standard operating procedures and in compliance with all regulations. An automated and escalating guided response should be made available immediately, so that no matter who is sitting at the control or operator’s station, the response will be the most effective possible.
Implementing the right process, enforced and automated, which relies on fully integrated information has been shown to:
Being proactive about maintenance is the standard today. Rail organizations that want to improve on the status quo are now using predictive intelligence to get proactive about their proactivity. What does that mean? If that same technology can identify anomalies that are precursors to failures, and the railway responds to them before any disruption occurs, the time and cost of failures can be significantly reduced.
As noted, it’s the daily, mundane, and sometimes tragic events that account for the real cost of delays and disruptions, while the catastrophic and generally unusual events get all the attention. It’s time to rethink the approach to rail operations and place more emphasis on the preventable, the predictable, and the inevitable response.
OPHIR LEVY, DIRECTOR OF SALES AND BUSINESS DEVELOPMENT, EMEA // NOVEMBER 27, 2018
Facial recognition isn’t typically the best option for detecting and tracking a known or unknown person of interest. Qognify’s Ophir Levy explains the crucial difference with Qognify Suspect Search, which does not require an ‘identity’ to rapidly search across multiple cameras (either in real time or post-incident). He also explains how, unlike facial recognition, Suspect Search can usually be deployed without the need for costly camera upgrades.
Learn more about Suspect Search here.