From $3 Trillion to $10 Trillion: How Privacy and Innovation Will Unlock the Future of SaaS

Explore the future of the SaaS industry and the role of privacy-enhancing technologies (PETs) in securing sensitive information, fostering trust, and unlocking the full potential of the rapidly growing $3 trillion SaaS market.

5 minute read

Oct 25, 2023

The Software as a Service (SaaS) market has emerged as a game-changer for businesses, revolutionising everything from operations and communication to how they collect data and make decisions.

According to McKinsey, the SaaS market is valued at roughly $3 trillion and, if the industry navigates its challenges correctly, is predicted to surge to $10 trillion by 2030. This incredible potential could propel the future of various sectors and industries.

However, realising this growth demands addressing certain pressing concerns surrounding data privacy and trust in these services.

In this article, we explore how companies use customer data, the current trust solutions for the SaaS market, and how advanced privacy-enhancing technologies (PETs) could completely transform this industry.

The Role of Trust Brokers in Online Business

Consider the last time you made an online purchase. Did you think twice about the security of your payment information? If you saw that the website used PayPal, Stripe, or another secure payment system, then probably not.

These platforms provide peace of mind. They make you comfortable with entering your card details and give you confidence that you will be billed for the items that you actually order, and not anything else. Globally, they've established themselves as trustworthy online payment processing tools in e-commerce transactions. 

With consumer trust in these platforms, even small websites can comfortably sell their products. You aren't trusting the business; you are trusting reputable, secure payment systems that have been thoroughly scrutinised and audited.

Now imagine a website without any of these guarantees, or a world in which these payment systems didn't exist in the first place. How could you trust that your payments would be processed securely? You couldn't.

You might be wondering: How does this tie into the SaaS sector and data privacy?

Understanding the Data Economy

The simple truth is, in recent years, users have become the product. The era of "generating value directly from customers" feels like a distant memory. The new game plan involves monetising customer data, making personal or behavioural data a sought-after asset.

Companies have discovered myriad ways to collect, analyse, and exploit user data. Experian, a leading consumer credit reporting company, is a prime example: it compiles enormous amounts of data on individuals and then sells this information to other companies for various purposes.

Social media platforms like Facebook or Instagram have also capitalised on this goldmine—collecting data on users' interactions, interests, and connections—to create targeted advertising campaigns to boost users’ responsiveness to the ads. 

Similarly, the recent update to Zoom's terms of service—effective as of July 27—announced the company’s right to utilise some aspects of customer data for training and improving its machine-learning models.

While using customer data for profit has its advantages, it also presents significant potential risks, particularly in terms of data privacy and trust.

The Risks in the Age of AI

The rise of sophisticated AI technologies like GPT-4 has led to a surge in Large Language Model (LLM) applications across multiple industries. Great as they may be, LLMs bring about substantial privacy risks. 

When users input confidential data into LLM-based services, that data can be retained, used for training, and potentially exposed beyond their control. These privacy risks require preventive solutions dedicated to user protection.

Numerous AI tools, such as OpenAI's ChatGPT, Google's Bard, and Microsoft's Bing, as well as Midjourney and Stable Diffusion, rely extensively on internet content and personal data during their training phase, however aggregated or anonymised that data may be. This has given rise to several legal challenges in the generative AI industry.

What Is Next for SaaS?

Since SaaS models allow integrations with numerous third-party cloud-hosted tools for shipping, forecasting, analytics, warehousing, accounting, project management, and more, each new integration introduces a potential point of security breach.

Just as you trust payment systems like PayPal to not steal from you, as illustrated at the beginning of the article, consumers and businesses alike require trust in the SaaS companies they choose to work with—particularly when it comes to handling sensitive data. 

To address this need, legal frameworks like GDPR and CCPA have been introduced, alongside company privacy policies. However, these governance standards remain largely theoretical, and their efficacy can only be ascertained through litigation.

Currently, trust in SaaS companies is established through compliance certificates. Supplier organisations set their own rules—for example, every employee must use a VPN—and then implement systems demonstrating that these rules are in place. 

As a result, the current SaaS trust model is like a house of cards, vulnerable to collapse under the weight of a single privacy scandal.

The Challenges and Solutions in SaaS Data Security

An incident in which Samsung engineers inadvertently leaked proprietary code to OpenAI's ChatGPT highlights that, although innovation and confidentiality can coexist, achieving both is not yet commonplace.

Serious businesses understand the importance of data privacy and are cautious about sharing their data with suppliers. To maintain control over their data, they often opt for on-premises solutions or build their own systems if they have the resources.

For instance, a financial institution might choose to create its own custom AI-based fraud detection system instead of relying on an external SaaS provider. By doing so, the institution ensures that the sensitive data involved in transactions never leaves its secured network, reducing the risk of exposure and reinforcing trust among its customers.

While building an in-house system gives a company more control over data privacy, it also requires a substantial commitment of resources, including time, money, and the need for skilled individuals to manage these systems.

Moreover, staying ahead of the newest developments in privacy and security technology may not be feasible for all businesses. Companies choosing this path risk lagging behind, as maintaining state-of-the-art security becomes ever more complex and requires specialised expertise.

The Need for Privacy-Enhancing Technologies

Privacy-enhancing technologies (PETs) might be the cloud-based solution we need to evolve our trust models. Unlike traditional models, PETs don't require blind trust in a system.

They enforce confidentiality constraints algorithmically, providing tangible and verifiable privacy protection. They ensure data opacity, guard against reverse engineering of sensitive data sources, and support the creation of a robust privacy infrastructure, enabling what-you-see-is-what-you-get processing.

Secure Enclaves: Protecting Data in Use

Secure enclaves, also known as confidential computing or trusted execution environments, are a type of PET designed to shield data even during computation. They provide a protected execution environment within the processor's hardware, where data remains encrypted and off-limits to external processes, even while in use. This represents a significant advance over traditional security measures, which focus primarily on protecting data at rest or in transit.

Intel's Software Guard Extensions (SGX) and ARM TrustZone are examples of hardware-level secure enclaves, and companies like Microsoft, Amazon, and NVIDIA are advancing this technology further.
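
To make the trust model concrete, the sketch below simulates the client-side flow of talking to an enclave-hosted service: the client checks an attestation report proving which code the enclave is running before releasing any sensitive data. The report format, measurement value, and helper names are invented for illustration; real deployments rely on hardware-signed attestation documents and vendor-specific SDKs rather than this plain hashing.

```python
import hashlib
import hmac

# Simulated client-side trust flow for an enclave-hosted service.
# In practice the "measurement" is produced and signed by the CPU or cloud
# provider; here it is an ordinary hash so the example runs anywhere.

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-binary-v1").hexdigest()

def verify_attestation(report: dict) -> bool:
    """Accept the enclave only if it proves it is running the exact code we expect."""
    return hmac.compare_digest(report["code_measurement"], EXPECTED_MEASUREMENT)

def send_sensitive_data(payload: bytes, report: dict) -> None:
    """Release sensitive data only after attestation succeeds."""
    if not verify_attestation(report):
        raise RuntimeError("Attestation failed: refusing to send sensitive data")
    # In a real system the payload would now be encrypted to a key that only
    # the attested enclave holds, so it stays protected even while in use.
    print(f"Attestation verified; sending {len(payload)} bytes to the enclave")

# Report the (simulated) enclave returns to the client.
report = {"code_measurement": hashlib.sha256(b"approved-enclave-binary-v1").hexdigest()}
send_sensitive_data(b"confidential customer records", report)
```

The key point is that trust shifts from a provider's promises to a verifiable property of the code and hardware actually handling the data.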

Differential Privacy: Preserving Anonymity and Utility

Differential privacy is another ground-breaking development in data security. It's designed to protect privacy by adding carefully calibrated noise to a dataset, ensuring that no individual's information is identifiable while preserving the overall utility of the data. 
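
As a rough illustration, the snippet below implements the classic Laplace mechanism for a simple counting query, assuming NumPy is available; the dataset, predicate, and epsilon value are arbitrary choices for demonstration rather than any particular vendor's implementation.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Return a differentially private count of records matching `predicate`.

    A counting query changes by at most 1 when a single record is added or
    removed, so its sensitivity is 1 and Laplace noise with scale 1/epsilon
    gives epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy dataset: ages of individuals (illustrative values only).
ages = [23, 35, 41, 52, 29, 61, 47, 38]

# How many people are over 40? Answered with epsilon = 0.5.
private_answer = laplace_count(ages, lambda age: age > 40, epsilon=0.5)
print(f"Noisy count of people over 40: {private_answer:.2f}")
```

Smaller values of epsilon add more noise and therefore stronger privacy, at the cost of a less accurate answer; this is the utility-versus-anonymity trade-off described above.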

Implementations of differential privacy by the likes of Apple, Microsoft, LinkedIn, and Meta, and even in the 2020 US Census, have earned this form of PET much attention in recent years. The focus now is on standardisation and making this technology accessible to non-expert developers in the industry.

The Future of SaaS

As advanced technologies revolutionise the SaaS business ecosystem, it becomes increasingly critical to address privacy concerns. The industry must work towards more robust trust models and prioritise compliance, ensuring the safety and trust of users by implementing the latest technological innovations, such as PETs.

By integrating innovative solutions like secure enclaves and differential privacy, the market can unlock its full potential, transforming business operations and data privacy for a brighter, more secure future.

We need a deeper understanding of these technologies so we can push the entire SaaS ecosystem towards a comprehensive privacy infrastructure that consumers and businesses can wholeheartedly trust. Ultimately, by cultivating this trust, the SaaS industry can grow from today's remarkable $3 trillion market towards its $10 trillion potential.

Interested in finding out more insights into the future of SaaS? Watch the recording of the future of SaaS panel discussion from the Eyes-Off Data Summit 2023, where Christian Reimsbach (OECD), Jack Fitzsimons (Oblivious), Simon Gallagher (Microsoft), Paul O’Neill (Intel), and Phil Cheetham explored the importance of trust in SaaS providers, especially with regard to compliance, privacy, and data protection.

If you want to find out more about what we're building at Oblivious to ensure the Future of SaaS is private, visit Oblivious, or contact us at hello@oblivious.com
