You need look no further than Microsoft's early efforts on its Security Copilot service to understand the true promise and real-world pitfalls of generative AI.
Early in 2023, the world's biggest software maker unveiled Security Copilot, one of its most significant new AI products. The service uses an internal model and OpenAI's GPT-4 to answer questions about cyberthreats in a manner akin to ChatGPT.
According to an internal Microsoft presentation from late 2023, the path to deployment was difficult but also revealed encouraging signs of the technology's potential. An excerpt of the presentation obtained by Business Insider offers some insight into how this significant AI product was developed.
During the presentation, Lloyd Greenwald, a partner at Microsoft Security Research, said Microsoft was initially developing its own machine-learning models for security use cases.
The effort drew on petabytes of security data, but Greenwald said it stalled for lack of computing resources, as "everyone in the company" was using Microsoft's limited supply of GPUs to work with GPT-3, the predecessor to GPT-4.
Then, according to the audio BI obtained, the software giant was given early access to GPT-4 as a "tented project," Microsoft's term for a project placed under strict access controls.
Microsoft then turned its attention from its own models to GPT-4 to explore what it could accomplish in the cybersecurity arena.
“We presented our initial explorations of GPT-4 to government customers to get their feel and we also presented it to external customers without saying what the model is that we were using,” Greenwald stated.
The main argument of the pitch was the advantage of using one universal AI model rather than several specialized ones.
According to Greenwald, Microsoft still maintains a number of specialized machine-learning models for handling particular issues. These include supply chain attack detection, compromised account detection, and attack campaign attribution.
“The difference is if you have a large universal model or a foundation model that they are called now, like GPT-4, you can do all things with one model,” he stated. “That’s how we pitched it to the government back then and then we showed them a little bit about what we were trying to do.”
The capabilities Microsoft first demonstrated to the government were, according to Greenwald, "childish compared to the level of sophistication" the company has since attained.