The Power of Generative AI – Photo by Steve Johnson on Unsplash

Generative AI is the big trending topic right now, and understandably features prominently in the news. The popularity of platforms such as OpenAI’s ChatGPT, which set a record for the fastest-growing user base by reaching 100 million monthly active users just two months after launch, has put the technology firmly on the minds of businesses globally.

These tools can increase productivity and efficiency by automating repetitive tasks and letting employees focus on higher-value work. They can also enhance creativity and innovation by assisting in brainstorming and ideation to generate novel solutions to complex problems.

Today, AI-powered applications are already well documented in fields such as eCommerce, security, education, healthcare, agriculture, gaming, transport, and astronomy. The business, productivity, and efficiency gains the technology provides are helping these industries flourish and open up new revenue streams.

But while generative AI tools bring a world of possibilities, they also open the door to some complex security concerns. For example, generative AI often requires access to vast amounts of sensitive data, which poses significant data privacy and protection challenges. Mishandling of, or unauthorized access to, these datasets can lead to breaches, regulatory penalties, and damaged reputations.

Using generative AI safely

To this point, it was great to be both a sponsor and a participant at Zenith Live in Las Vegas last month. The event was attended by security architects, CIOs, and CTOs, and among the speakers was Zscaler’s EVP and Chief Innovation Officer, Patrick Foxhoven. He talked about the potential risks associated with AI and was quick to point out that AI is not new to Zscaler, explaining that the company has been leveraging the technology for many years, while agreeing it does have the potential to change everything.

However, he also cautioned that the same generative AI capabilities could enable both deepfakes and data loss. Patrick talked about the importance of enabling customers to use generative AI safely, and how Zscaler has added a new URL category and cloud app for tools like Bard, ChatGPT, and others. This allows admins to finely control who can access these tools and to enforce browser isolation so that sensitive data cannot be uploaded, as the sketch below illustrates.
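To make the idea concrete, here is a minimal, hypothetical sketch of category-based access control for generative AI tools. It is not Zscaler’s actual API or configuration syntax; the domains, group names, and actions are invented purely to illustrate the kind of policy logic described above (allow, block, or route through browser isolation per user group).

```python
# Hypothetical sketch of category-based access control for generative AI tools.
# NOT Zscaler's API or configuration syntax; all names and values are
# illustrative only.

from dataclasses import dataclass

# Illustrative "AI & ML Applications" URL category
GENAI_DOMAINS = {"chat.openai.com", "bard.google.com", "claude.ai"}

@dataclass
class Policy:
    group: str   # user group the rule applies to ("*" = everyone)
    action: str  # "allow", "isolate" (browser isolation), or "block"

# Example rules: engineering may use the tools inside an isolated browser
# session, finance is blocked outright, everyone else is allowed.
RULES = [
    Policy(group="engineering", action="isolate"),
    Policy(group="finance", action="block"),
    Policy(group="*", action="allow"),
]

def decide(domain: str, user_group: str) -> str:
    """Return the action to apply for a request to `domain` by `user_group`."""
    if domain not in GENAI_DOMAINS:
        return "allow"  # not a generative AI tool; outside this policy's scope
    for rule in RULES:
        if rule.group in (user_group, "*"):
            return rule.action
    return "block"  # default-deny if no rule matches

print(decide("chat.openai.com", "engineering"))  # -> "isolate"
print(decide("chat.openai.com", "finance"))      # -> "block"
```

The point of the isolation action is that users can still interact with the tool, but the session runs in a remote browser so pasted or uploaded sensitive data never leaves the controlled environment.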

Getting smart about cyber risk and investment

Zscaler also provides risk scores for commonly used apps to determine whether their AI integrations pose a threat, based on each application’s security posture and data retention policies. Furthermore, AI-generated insights from Zscaler’s new Risk 360 platform can help security teams prioritize risks, isolate affected applications, and implement policies that prevent the same issues from recurring.

Zscaler Risk 360 is a comprehensive tool designed to help security leaders quantify and visualize cyber risk. It assesses an organization’s security posture using data and analytics, enabling teams to build a risk profile and better understand the financial implications of cyber risks.
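As a rough illustration of what “quantifying cyber risk” can mean in practice, the sketch below scores a handful of posture factors, combines them with weights, and translates the result into a financial exposure figure. This does not reflect Risk 360’s actual methodology or data model; the factor names, weights, and dollar amounts are made up for illustration only.

```python
# A minimal, generic sketch of quantifying cyber risk from posture factors.
# This does NOT reflect Risk 360's methodology; factors, weights, and dollar
# figures are hypothetical.

# Posture factors scored 0.0 (strong) to 1.0 (weak) — hypothetical inputs
posture = {
    "external_attack_surface": 0.6,
    "compromise_likelihood": 0.4,
    "lateral_propagation": 0.5,
    "data_loss_exposure": 0.7,
}

# Hypothetical weights expressing how much each factor drives overall risk
weights = {
    "external_attack_surface": 0.25,
    "compromise_likelihood": 0.30,
    "lateral_propagation": 0.20,
    "data_loss_exposure": 0.25,
}

# Weighted risk score between 0 and 1
risk_score = sum(posture[k] * weights[k] for k in posture)

# Translate the score into a rough financial exposure figure, given an
# assumed worst-case breach cost — again, purely illustrative numbers.
worst_case_breach_cost_usd = 25_000_000
estimated_exposure_usd = risk_score * worst_case_breach_cost_usd

print(f"Risk score: {risk_score:.2f}")
print(f"Estimated financial exposure: ${estimated_exposure_usd:,.0f}")
```

Expressing risk in financial terms like this is what makes the conversation with the board concrete, which leads to the next point.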

What I find particularly beneficial about this tool for customers is that it can help fund projects: it enables security leaders to be smarter about where they invest their dollars, and it supports a meaningful dialogue with the board, helping secure funding with insights that demonstrate the potential impact of a breach.

Will AI steal our jobs?

But many are cautious, even highly concerned, about AI. One worry is that AI will take our jobs. IBM has reassured us that the day when AI completely replaces humans is a long way off. That said, the US actors’ union took its 160,000 members out on strike in July 2023 over fears that AI will lead to far fewer employed actors as studios use the technology to create “digital twins” of performers.

Likewise, AI is a big issue for writers, especially with ChatGPT being used to write everything from law school and business school papers to legal briefs, with varying degrees of success. Winning limits on the use of AI is also a key issue for the Writers Guild of America, which has been on strike against studios and streaming services since May.

There is a wealth of industry predictions about the impact AI will have on society between now and 2030. However, the speed at which AI has started to affect our everyday lives makes me think many of these predictions will come true well before 2030. Who knows what AI applications will look like next year, let alone in six-and-a-half years.

Ultimately, AI presents a wealth of opportunities and challenges for individuals, organizations, and governments around the globe. It will be interesting to see how it continues to evolve in the months and years ahead, and whether businesses view it as a threat or as an opportunity to innovate.


Xalient addresses the challenges large global enterprises face around networking and security. Headquartered in the UK and with offices in the USA, Xalient counts Kellogg’s, Hamley’s, WPP and Keurig Dr Pepper among its clients. It was established eight years ago to disrupt the traditional markets for secure networking, taking advantage of the huge shift to cloud technology that has created high demand for flexible, cost-effective global connectivity and protection against increasingly complex cyber threats. Xalient combines transformative, software-defined network, security, and communication technologies with intelligent managed services, delivered through its AIOps platform, Martina, and drives Zero Trust initiatives that keep the world’s largest brands resilient, adaptable, and responsive to change.
