
Confidential AI Creates New Business Opportunities and Revenue Streams

The technology doesn’t just enhance security—it boosts business

Hi all,

The European Union's Digital Operational Resilience Act (DORA) marked a significant shift in the landscape of technology governance, emphasizing stricter oversight and accountability for digital systems, including AI, in the financial services industry. With the regulation set to take effect in 2025 and others on the horizon, organizations are already making strategic changes.

Unlike traditional data obfuscation techniques, which have become increasingly vulnerable to sophisticated AI attack vectors, confidential computing (and the solutions that run on it) offers robust protection by isolating sensitive data during processing. This ensures that data remains encrypted and inaccessible to unauthorized entities while it is in use, addressing the critical need for heightened security in the face of advanced AI threats.
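To make the "encrypted except while in use" idea concrete, here is a minimal, purely illustrative Python sketch. The SimulatedEnclave class is a hypothetical stand-in for a hardware-backed trusted execution environment, not any vendor's actual API; it simply shows that plaintext exists only inside the protected processing boundary, while everything entering or leaving stays encrypted.

```python
# Illustrative only: a toy model of "encrypted except while in use".
# SimulatedEnclave is a hypothetical stand-in for a hardware TEE;
# it is NOT a real confidential computing SDK.
from cryptography.fernet import Fernet


class SimulatedEnclave:
    """Plaintext exists only inside process(); callers only ever see ciphertext."""

    def __init__(self, data_key: bytes):
        # In a real deployment the key would be released only after remote attestation.
        self._cipher = Fernet(data_key)

    def process(self, encrypted_record: bytes) -> bytes:
        record = self._cipher.decrypt(encrypted_record)   # decrypted "in use", inside the boundary
        result = record.upper()                           # stand-in for real analytics or an AI model
        return self._cipher.encrypt(result)               # re-encrypted before leaving the boundary


key = Fernet.generate_key()
cipher = Fernet(key)

# Data is encrypted at rest and in transit ...
encrypted = cipher.encrypt(b"card_txn:4242,amount:120.00")

# ... and only the enclave boundary ever sees plaintext.
enclave = SimulatedEnclave(key)
encrypted_result = enclave.process(encrypted)
print(cipher.decrypt(encrypted_result))  # b'CARD_TXN:4242,AMOUNT:120.00'
```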

But confidential computing is so much more than a security enabler—we’re increasingly seeing it become a catalyst for new business models and revenue streams. 

By integrating confidential AI into their workflows, organizations can not only safely explore AI-driven solutions and accelerate AI models into production—they can also discover insights and opportunities that were previously unimaginable and out of reach due to privacy concerns. 

In this issue, we're delving into a real-world example of a financial services provider that’s already benefiting from confidential AI, showcasing its immense potential to tap into new value. By demonstrating these success stories, we aim to inspire more organizations to adopt confidential AI and fully realize its benefits.

— Aaron Fulkerson, CEO, Opaque Systems

Confidential AI Creates New Business Opportunities and Revenue Streams

Confidential AI is not just a security measure. It’s unlocking entirely new business models and revenue streams, especially as AI applications grow. That’s according to Mike Bursell, Executive Director of the Linux Foundation’s Confidential Computing Consortium, and Mark Russinovich, Chief Technology Officer of Microsoft Azure, who spoke at the Confidential Computing Summit hosted by Opaque this past June.

Confidential computing addresses one of the most critical challenges businesses face today: protecting sensitive data while moving AI models into production. While traditional security measures are effective at keeping data safe when it’s stored or transmitted, confidential computing extends this protection to when data is in use. But this capability isn’t just an added value. It’s an opportunity to push data—including the data feeding AI—further than it’s gone before. 

“Confidential computing enables new things, new opportunities, new business models,” Bursell said, by unlocking use cases that have been out of reach due to the sensitivity of data or to regulation. Estimates of the share of enterprise data sensitive enough to require confidential data pipelines range from 30 to 60 percent.

The Royal Bank of Canada is exploring confidential AI to enhance targeted advertising for its loyalty customers. By combining data from credit card transactions and merchant purchases in a secure environment, RBC can apply AI-powered analytics to gain deeper insights into consumer behavior without compromising privacy, Russinovich explained. That’s confidential AI in action: “They can see that a customer who bought an airline ticket might be preparing for a trip and could benefit from targeted offers for travel-related purchases.”

Confidential AI allows RBC to merge sensitive data sets securely, ensuring that customer privacy is maintained while still gaining valuable information. And with that assurance of privacy and trust, historically hesitant industries, including financial services, are boldly moving to the cloud. Many enterprises now see confidential computing as a mechanism for tapping burst compute capacity in the cloud without concerns about data privacy or sovereignty.
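To illustrate the pattern Russinovich describes, here is a hypothetical Python sketch of a confidential data clean room: two sensitive data sets are joined and analyzed only inside the trusted environment, and only a derived, non-sensitive signal is released. The table columns and the offer rule are illustrative assumptions, not RBC’s actual pipeline or Opaque’s API.

```python
# Hypothetical sketch: sensitive data sets are joined and analyzed only
# inside a confidential environment; only derived outputs are released.
# Column names and the offer rule are illustrative, not RBC's system.
import pandas as pd

# Bank-side data: card transactions (would arrive encrypted in practice).
transactions = pd.DataFrame({
    "customer_id": [1, 2],
    "merchant_id": ["AIR-001", "GROC-042"],
    "amount": [480.00, 63.10],
})

# Merchant-side data: purchase categories.
merchants = pd.DataFrame({
    "merchant_id": ["AIR-001", "GROC-042"],
    "category": ["airline", "grocery"],
})


def confidential_offer_job(txns: pd.DataFrame, merch: pd.DataFrame) -> pd.DataFrame:
    """Runs inside the trusted environment; the raw joined data never leaves it."""
    joined = txns.merge(merch, on="merchant_id")
    # Simple stand-in for an AI model: flag customers who look like travelers.
    travelers = joined[joined["category"] == "airline"]
    # Only the derived signal (customer_id plus an offer) is released.
    return travelers[["customer_id"]].assign(offer="travel_promo")


print(confidential_offer_job(transactions, merchants))
```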

Confidential computing provides a level of protection that can alleviate security concerns, making it viable for all sectors to fully leverage cloud computing, Russinovich said. He further suggested that, thanks to confidential computing, data sovereignty could one day be determined by who holds the encryption keys rather than by where the data is physically processed. This vision aligns with the recent Digital Operational Resilience Act (DORA) regulation in the EU, which emphasizes the importance of robust security measures and operational resilience in digital services.

Watch the full interviews with Mark Russinovich and Mike Bursell below.

In the Lab

The Latest Happenings at Opaque Systems

Conversations at Confidential Computing Summit

We asked leading minds in confidential computing about opportunities and innovation in the industry at our Confidential Computing Summit. Find all the recordings on our YouTube page and on the event Interviews page.

Gen AI’s Impact on the Future of Work

Associate professor at UC Berkeley and Opaque Systems co-founder Raluca Ada Popa delivered a keynote address at Accenture's TechStar virtual graduation, celebrating the latest cohort to complete the program. Addressing over 2,000 attendees, she highlighted how generative AI drives innovation and equips leaders with essential insights, shaping the future of work. 

Keeping Government Data Secure

Jason Lazarski, Head of Sales at Opaque Systems, recently joined Jonathan Ring, deputy assistant national cyber director for technology security at The White House Office of the National Cyber Director (ONCD), on Intel’s InTechnology podcast. Alongside episode hosts Taylor Roberts, director of global security policy at Intel, and Camille Morhardt, director of security initiatives and communications at Intel, they discussed how enterprises can keep their most sensitive data secure during multi-party collaboration or when developing AI models.

Securing AI in the Enterprise: Privacy-Preserving Technologies and Use Cases

How are privacy-preserving technologies revolutionizing data protection and encryption? Aaron Fulkerson, CEO, and Rishabh Poddar, Co-Founder and CTO of Opaque Systems, discuss this landscape, starting with the state of data protection and encryption. They also describe the broad application of Opaque's Confidential AI Platform and the variety of customer use cases across industries. For more, download our latest whitepaper, Securing Generative AI in the Enterprise.

Code for Thought

Worthwhile Reads

🏋️‍♀️ AI surveillance at Paris Olympics sparks data privacy concerns: When the 2024 Paris Olympics kick off on Friday, July 26, thousands of athletes and spectators will gather in the city for two weeks. The French government plans to deploy numerous security measures, including AI-powered surveillance that will detect unusual crowd movements. But experts are questioning the technology’s privacy safeguards around the personal data it captures. The Olympics could be a vital moment to showcase the potential AI surveillance holds for physical safety—but what about digital safety? Confidential AI could guarantee built-in data privacy for similar programs in the future.

⛓️ Google’s acquisition of cybersecurity startup Wiz falls through. Israeli cybersecurity startup Wiz has ended discussions of a $23 billion deal with Google, which would have been the tech giant’s largest acquisition ever. Wiz will instead shift its focus to a public offering. The startup provides cloud-based cybersecurity solutions that help companies eliminate risks on cloud platforms, including those running AI workloads. The collapse of the deal will likely slow Google’s cloud infrastructure push, as the company has been courting clients that want to build more secure operations in the cloud.

🇧🇷 Meta pauses Gen AI programs in Brazil due to privacy concerns. Meta has halted the use of its gen AI tools in Brazil after the country’s National Data Protection Authority (ANPD) enacted a new ban on the technology. The suspension calls into question the validity of Meta’s privacy policy for using personal data to train gen AI systems, and follows an event the company hosted that debuted an AI-powered ad targeting program on Meta’s WhatsApp chat service. Brazil has WhatsApp’s second-largest user base, and those users may lose access to Meta’s AI tools unless the company revises the personal data processing section of its privacy policy.

🔍 Spate of data breaches spotlights lack of baseline security controls. Widespread customer data breaches at major corporations—most recently AT&T and UnitedHealthcare—highlight an absence of basic security measures at enterprise companies. The two companies join a rising number of organizations whose hacks have been attributed to a lack of multifactor authentication. These high-profile breaches are fueling calls to build more robust data security into products from the start—solutions that could include confidential AI to keep data safe.
