
AI Use Cases Will Drive Confidential Computing Ubiquity

Digging deep into the insights from industry experts at the Confidential Computing Summit.

Hi Friends,

Our second Confidential Computing Summit is squarely in the rearview mirror. Now that I’ve had time to reflect, I’m more convinced than ever that we are at a pivotal moment in the evolution of technology. The insights and innovations showcased at the summit paint a vivid picture of a future where all computing is inherently confidential.

But we have some work to do before we get there.  

We conducted more than two dozen interviews at the event, speaking with some of the brightest minds at the forefront of innovation in our industry: Karthik Narain, Group Chief Executive, Technology, at Accenture; Mark Russinovich, CTO of Microsoft Azure; Mark Papermaster, CTO and Executive Vice President at AMD; and Jason Clinton, CISO at Anthropic, to name a few. We asked them about the most promising use cases for confidential computing, what it will take for the technology to become table stakes, and, of course, the challenges that stand in our way.

Over the next few weeks, I’m excited to share the trends and lessons gathered through these conversations, starting below with our interview with Nelly Porter, Leader of Confidential Computing and Encryption at Google Cloud.

If there’s one thing I heard again and again, it’s that we, as an industry, need to continue to focus on telling compelling stories about the value confidential computing brings, coupled with supporting the developers who will drive this change. It’s also on us here at Opaque to make sure the user experience we deliver is top-notch.

Stay tuned, stay curious, and let’s continue to innovate and inspire. 

-- Aaron Fulkerson, CEO, Opaque Systems

To Work, Confidential Computing Should Be Invisible: Q&A With Google’s Nelly Porter

"Confidential computing provides the privacy boundary needed to safely use sensitive data," Nelly Porter, Leader of Confidential computing and Encryption at Google Cloud, shared at Opaque’s Confidential Computing Summit. But for confidential computing to be truly effective, it must also be invisible. 

"People don't want to spend so much effort to modify their applications. The workloads for them need to be seamless and effortless to adopt," she explains.

The goal is to integrate confidential computing so smoothly into existing systems that it operates without requiring significant changes or additional effort. Invisibility is crucial for widespread adoption.

Google has been at the forefront of making confidential computing powerful and “invisible.” The company has invested years in the technology, focusing on scalability and ease of use and embedding it across its hardware and software offerings, from CPUs and GPUs to higher-level cloud services.
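To make “seamless and effortless to adopt” concrete, here is a minimal sketch of what enabling a Confidential VM looks like with Google Cloud’s google-cloud-compute Python client. The project, zone, machine type, and image below are illustrative placeholders, and field names can shift between library versions, so treat this as a sketch rather than an official example. The point is the asymmetry: confidentiality comes down to one extra config field, and the workload itself runs unmodified.

```python
from google.cloud import compute_v1


def create_confidential_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance()
    instance.name = name
    # Confidential VMs require a supported machine type (e.g., AMD SEV on N2D).
    instance.machine_type = f"zones/{zone}/machineTypes/n2d-standard-2"

    # The "invisible" part: one flag asks the CPU to encrypt the VM's memory.
    # The guest OS and application need no changes.
    instance.confidential_instance_config = compute_v1.ConfidentialInstanceConfig(
        enable_confidential_compute=True
    )
    # Confidential VMs cannot live-migrate, so they must stop for maintenance.
    instance.scheduling = compute_v1.Scheduling(on_host_maintenance="TERMINATE")

    instance.disks = [
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts"
            ),
        )
    ]
    instance.network_interfaces = [
        compute_v1.NetworkInterface(network="global/networks/default")
    ]

    compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
```

Everything else in the request is what an ordinary VM would need anyway; that one-field delta is what “invisible” looks like in practice.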

Confidential computing is on an inevitable path to ubiquity, Porter says, echoing the general consensus at the summit. The industry is increasingly rallying together through initiatives like the Confidential Computing Consortium, where partners set standards and drive adoption. Meanwhile, use cases are multiplying, as is awareness of the technology’s potential.

In finance, for example, confidential computing can help prevent fraud by allowing banks to jointly analyze shared transaction data and detect suspicious activity without exposing the underlying customer records to one another.

"Companies can now solve problems that were previously impossible due to privacy concerns, like detecting fraud without sharing sensitive data," Porter notes. 

Moving forward, as privacy regulations tighten and data breaches become more costly, demand for secure computing environments will only grow. In an ideal scenario, confidential computing will soon be as common, and as “invisible,” as encryption, Porter says: “It's about making secure data processing the default.”

In the Lab

The latest happenings at Opaque Systems

Product Demo: Sensitive Data Sharing and Reporting

Opaque’s platform lets organizations pool threat incident data to compute aggregate metrics and produce threat landscape reports, all without exposing any individual organization’s incident data sets. Watch a demonstration of how the platform works here.

Founder Spotlight: Ion Stoica

The rise of privacy concerns around confidential data and the ever-growing appetite for data don’t have to be at odds, says Ion Stoica, Professor at UC Berkeley, Executive Chairman of Databricks and Anyscale, and Co-founder and Board Member of Opaque Systems. Providing a solution for these divergent trends is what drives his passion for confidential computing.

Code for Thought

Worthwhile reads

📝 EDPS unveils new genAI guidelines. The European Data Protection Supervisor (EDPS) released new orientations for European Union institutions and agencies on how to use genAI securely, with a particular focus on data protection. The orientations, which define what genAI is and explain how to determine whether it involves personal data processing, are the authority’s first step toward more detailed guidance that will account for genAI’s rapid growth.

🏥 Kaiser Permanente delays breach disclosure. Enterprises that use AI-powered tracking technologies may want to evaluate the security of those tools amid one of the latest high-profile data breaches. In a letter to customers on May 31, Kaiser Permanente revealed that a breach of user data occurred in October 2023. The healthcare company said it may have transferred customer data, such as names and IP addresses, to Google and Microsoft through tracking technologies on its website and apps. While the U.S. Department of Health and Human Services requires healthcare companies to report breaches within 60 days, Kaiser Permanente waited five months, disclosing the incident in April.

⚖️ NIST releases final version of updated genAI guidelines. The National Institute of Standards and Technology (NIST) has released the final version of its AI Risk Management Framework Generative AI Profile, following the draft published in April. The profile gives organizations a set of guidelines for the responsible use and development of generative AI systems. NIST’s recommendations include disclosing the use of generative AI to end users; aligning genAI use with relevant laws and policies, including those related to data privacy; and conducting regular audits of AI-generated content.
