Open Malmo (Image Credit: Axis Communications)

Open Malmo, the last leg of Axis Communications' city summits, took place last week. Delegates heard how physical security is increasingly being treated as part of the IT infrastructure. IP cameras and IoT solutions sit on the network, which means they must be managed to protect that network.

At the same time, the ability of such devices to process data at the edge of the network opens up new use cases. They are being used to monitor production lines and identify problems in manufacturing. AI is another technology now running inside cameras, and it has had a significant impact on the security industry.

In a keynote session, Mats Thulin, Director of Core Technologies at Axis Communications, talked more about the intersection of AI and physical security.

AI is not a new technology for physical security

One potential benefit of using AI to power video analytics is greater accuracy. In healthcare, we have already seen how AI has improved cancer detection in imagery. Regarding video surveillance, Thulin said AI would enable better identification and classification of objects, people, and events in real time with a high degree of accuracy. That would allow security personnel to quickly identify potential threats and respond more effectively.

Mats Thulin, Director of Core Technologies at Axis Communications (Image Credit: LinkedIn)

But there is a caveat here, and that is training. Some things are easy to detect, but there are other areas where Thulin believes the technology will take longer to be effective. One example is moving objects that pass behind or through other objects, which require extensive training to track reliably.

Another key area where AI is influencing physical security is access control. AI-powered biometric authentication, such as facial recognition and iris scanning, provides a more secure and convenient alternative to traditional access cards or keypads. The AI algorithms powering these systems continually adapt and improve, making it increasingly difficult for unauthorized individuals to gain entry. A challenge here is dealing with bias in the training sets to improve accuracy.
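As a rough, hypothetical sketch of the matching step behind such systems (the embedding extraction, enrolment process, and threshold below are assumptions for illustration, not Axis's implementation), a biometric check often reduces to comparing a live embedding against enrolled templates:

```python
import numpy as np

# Hypothetical illustration: decide whether a face embedding produced by an
# upstream recognition model matches an enrolled template. The embedding
# extraction itself (camera + model) is assumed to happen elsewhere.

SIMILARITY_THRESHOLD = 0.85  # assumed operating point; tuned per deployment


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def grant_access(live_embedding: np.ndarray,
                 enrolled_templates: dict[str, np.ndarray]) -> str | None:
    """Return the matching identity if any enrolled template is close enough."""
    best_id, best_score = None, -1.0
    for identity, template in enrolled_templates.items():
        score = cosine_similarity(live_embedding, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= SIMILARITY_THRESHOLD else None
```

The threshold is where the bias question shows up in practice: set it, and the training data behind the embeddings, poorly and the error rates differ across groups of people.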

Improved optimisation of resources and anomaly detection

AI is also being leveraged to optimise security operations and resource allocation. By analysing historical data and real-time inputs, AI can help predict when and where security incidents are most likely to occur. This allows organisations to deploy security personnel and equipment strategically to the areas of greatest need, enhancing overall effectiveness.
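A minimal, illustrative sketch of that idea, using made-up incident records and zone names rather than any real deployment, might simply rank zones by historical incident frequency for a given hour so patrols can be weighted accordingly:

```python
from collections import Counter
from datetime import datetime

# Minimal sketch: rank site zones by historical incident frequency around a
# given hour of day. The incident records and zone names are invented.

incidents = [
    {"zone": "loading-bay", "time": datetime(2024, 3, 1, 23, 15)},
    {"zone": "loading-bay", "time": datetime(2024, 3, 8, 23, 40)},
    {"zone": "car-park",    "time": datetime(2024, 3, 2, 1, 5)},
    {"zone": "main-gate",   "time": datetime(2024, 3, 5, 23, 55)},
]


def hotspots_for_hour(records, hour: int) -> list[tuple[str, int]]:
    """Count past incidents per zone within +/-1 hour of the given hour."""
    counts = Counter(
        r["zone"] for r in records if abs(r["time"].hour - hour) <= 1
    )
    return counts.most_common()


print(hotspots_for_hour(incidents, hour=23))
# [('loading-bay', 2), ('main-gate', 1)] -> weight night patrols accordingly
```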

The use of historical data and anomaly detection can also reveal attempts to game security systems, such as probing perimeters to test response times. Such incidents can be hard to detect and, in the physical world, often lead to a delayed response after numerous false positives. AI can analyse the data, detect where such actions are taking place, and enable a more appropriate response.
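As a simple, hypothetical illustration of the anomaly-detection idea (not Axis's method), a zone could be flagged when its event count in the current window deviates sharply from its historical baseline:

```python
import statistics

# Illustrative sketch: flag a zone whose count of perimeter events in the
# current window deviates sharply from its historical baseline, which can
# indicate someone probing response times.


def is_probing(history: list[int], current_count: int,
               z_threshold: float = 3.0) -> bool:
    """history: per-window event counts for this zone; current_count: this window."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero on flat history
    z_score = (current_count - mean) / stdev
    return z_score >= z_threshold


weekly_counts = [2, 1, 3, 2, 0, 2, 1]              # typical quiet perimeter zone
print(is_probing(weekly_counts, current_count=9))  # True: unusual burst of events
```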

Integrating AI with Internet of Things (IoT) devices is another transformative trend in physical security. AI-enabled sensors and smart cameras can autonomously monitor a facility, detect anomalies, and trigger automated responses like locking doors or activating alarms. This level of interconnectivity and intelligent automation enhances the speed and precision of security measures.
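That automation loop can be sketched, purely hypothetically, as a rule table that maps a detected event to a response; the device actions below are placeholders rather than a real camera or access-control API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the automation idea described above: map a detected
# anomaly to an automated response. The device calls are placeholders only.


@dataclass
class Event:
    kind: str        # e.g. "person_detected", "door_forced"
    zone: str
    after_hours: bool


def lock_doors(zone: str):
    print(f"[action] locking doors in {zone}")


def trigger_alarm(zone: str):
    print(f"[action] sounding alarm in {zone}")


def notify_operator(event: Event):
    print(f"[action] alerting operator: {event.kind} in {event.zone}")


def respond(event: Event):
    """Simple rule table; real deployments would be policy-driven and audited."""
    if event.kind == "door_forced":
        trigger_alarm(event.zone)
        notify_operator(event)
    elif event.kind == "person_detected" and event.after_hours:
        lock_doors(event.zone)
        notify_operator(event)


respond(Event(kind="person_detected", zone="warehouse-east", after_hours=True))
```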

Ethics, privacy and compliance are hot issues

One of the challenges presented by AI in physical security is the need to address ethical and privacy concerns. As these systems become more advanced, concerns about the potential for misuse or abuse of the data collected increase. Robust data governance policies and transparency around AI decision-making processes are crucial to maintaining public trust.

Additionally, the reliance on AI-powered systems introduces new vulnerabilities that must be addressed. Cybersecurity threats, such as hacking or data manipulation, can compromise the integrity of these systems, potentially rendering them ineffective or even dangerous. Rigorous cybersecurity measures and ongoing system monitoring are essential to mitigate these risks.

GenAI brings risk

Training AI to examine video and still images more closely is not a new science, and the benefits are already evident. However, extending that to encompass GenAI and its response to questions asked of the data raises concerns. Thulin admitted there were concerns over hallucinations in the data.

There are also worries about how GenAI could potentially manipulate data to answer questions. This is already being seen in the wider use of GenAI, where answers can be less than truthful. Two reasons for that are the training sets and GenAI’s interpretive nature.

The training set for video surveillance should be limited to what has been captured. However, when seeking greater understanding or analysis, organisations may use external data over which they have no control and whose quality they cannot guarantee. When that external data is combined with GenAI to enhance surveillance data, organisations must ensure the images are not misinterpreted.

Similarly, GenAI’s conversational nature may lead to prompts that cause the AI to return false positives. With the focus on compliance and privacy, organisations must implement verification processes to avoid this.
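One hedged sketch of such a verification process, with illustrative names and data rather than any vendor's API, is to cross-check the claims in a GenAI answer against the detections the analytics layer actually produced before anyone acts on it:

```python
# Hypothetical verification step: before a GenAI answer about footage is acted
# on, cross-check the objects it mentions against the detection metadata the
# analytics pipeline actually produced. Names and structures are illustrative.

detected_labels = {"person", "forklift"}          # from the video analytics layer
genai_answer = "A person carrying a weapon entered through the loading bay."
claimed_labels = {"person", "weapon"}             # extracted from the answer upstream


def verify_answer(claimed: set[str], detected: set[str]) -> tuple[bool, set[str]]:
    """Flag any claim not backed by an actual detection."""
    unsupported = claimed - detected
    return (not unsupported, unsupported)


ok, unsupported = verify_answer(claimed_labels, detected_labels)
if not ok:
    print(f"Answer needs human review, unsupported claims: {unsupported}")
```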

Enterprise Times: What does this mean?

There is no question that AI’s impact on physical security has been profound. It offers enhanced capabilities in areas like video analytics, access control, and operational optimisation. However, the responsible and ethical deployment of these technologies remains a critical priority for the industry.

It’s refreshing to see physical security and cybersecurity being discussed as part of the same solution. The technology overlap has existed for decades but is too often dismissed. One reason is that they are often owned by different parts of the business. Yet, IP cameras and IoT security systems often sit on the same physical network as IT systems and use storage systems operated and controlled by IT.

Addressing that disconnect will be challenging for many organisations. It requires rethinking departments and even budgets. However, the IT systems used by security teams are already part of the IT department, so this is not insurmountable.

As AI evolves, security professionals, the compliance team, and IT must work together to address ethical, privacy, and security challenges. Enterprise Times will soon publish a podcast with Thulin in which he discusses the impact of AI on physical security.
