Backing Founders – Egregious

Curious about how AI is shaping the future of online dynamics? AI-powered bots and agents are powerful tools, but they also pose a growing threat.

Watch our latest #BackingFounders episode with Rupert Small, PhD, founder and CEO of our portfolio company Egregious, and discover how they are working to help companies and organisations navigate the opportunities and risks.

Transcript

David [00:00:00]:

Hello and welcome to Backing Founders. I’m David Mott from Oxford Capital, and today I’m delighted to be joined by Rupert Small, the founder of Egregious.

Rupert, I’ve got to start with this name—tell us a little bit about how the name came about and what Egregious is.

Rupert:

Yeah, good question. We get asked that a lot, because it is an unusual name.

In its modern form, egregious means outstandingly bad, which isn’t exactly a word most people would reach for when naming their company. But in its archaic form, it meant remarkably good.

It’s one of those quirky words that have transformed in meaning over time, reflecting how culture and language evolve. We love that duality—it speaks to the nature of meaning itself.

It also ties into our relationship with AI. AI is going to do remarkable things. It will give humans superpowers, transforming our lives in incredible ways. But at the same time, it has the potential for harm. AI will be used to deceive, scam, and manipulate people in ways we haven’t seen before.

So, the name Egregious calls out both the opportunities and the dangers of this new technology.

David:

Thank you. We’re really going to dive into this today—looking at both the potential and the risks of AI.

Just tell us a little bit more about what Egregious does. What’s your approach, and what market are you addressing?

Rupert:

We essentially help answer the question: What’s happening online right now?

Or, more often: What on earth is happening online right now?

The internet has become chaotic, with bizarre, polarizing, and often outrageous content surfacing constantly. This is partly driven by the commercial model of most online platforms—engagement farming. Their goal is to keep people online by any means necessary, sometimes through rage-bait and polarization.

At the same time, the world itself is becoming more unstable and divided.

We help organizations understand this digital landscape—how it impacts their business, operations, customers, and even their products.

David:

Let’s get practical. Say I’m a big corporate in the consumer space. How might a company use Egregious? What kind of threats could they be facing?

Rupert:

If you have a brand or product in the market, you care about how it’s performing. Are people engaging with it in a meaningful way? Are customers becoming advocates for your brand?

Our job is to analyze that. We monitor discussions happening in the open digital world—understanding where your product is being talked about, how it’s impacting consumers, and what’s influencing their behavior.

This insight helps businesses refine their strategies, address potential risks, and adapt to shifting consumer sentiment.

David:

What other kinds of companies and organizations do you partner with?

Rupert:

The online world is a shared space, so our services are industry-agnostic.

We work with everything from major consumer brands to government agencies and national security organizations. They all need to understand digital conversations, the risks involved, and how those risks relate to their operations.

David:

Your website uses strong language—terms like countering deception, disinformation, misinformation, and polarization (DDMP). How big is this threat, and how aware are companies of it?

Rupert:

The threat is enormous—and growing.

A useful analogy is cyber security. When the internet was first built, we realized that connecting computers also enabled viruses to spread. That led to the rise of the cyber security industry, which protects our hardware and systems from harmful code.

Now, we’re dealing with cyber AI—AI that can manipulate information, deceive people, and shape narratives online. The challenge is no longer just about protecting computers but protecting people from AI-driven deception.

This industry is still in its infancy, but it will grow massively in the coming years.

David:

With AI becoming embedded in our daily lives, this issue is only going to get bigger.

How does Egregious plan to apply its business model across different industries? How are you going to make money?

Rupert:

We look at this through metaphors.

Two decades ago, when social media took off, online intelligence was built around the buying and selling of attention. Marketers were the buyers, and social media platforms were the sellers.

But the internet has fundamentally changed since then. Instead of just two major platforms (Facebook and YouTube), we now have hundreds. The landscape is more fragmented and volatile than ever—look at how platforms like Truth Social and Elon Musk’s Twitter acquisition have shaken things up.

Add to that agentic AI—AI that creates and spreads its own narratives—and suddenly, traditional tools for monitoring online trends are no longer sufficient.

So our model is simple: The internet has changed, AI has changed how we interact with it, and businesses need new tools to navigate this new reality. That’s where we come in.

David:

Let’s talk about your own background. You did a PhD in mathematics and studied physics. How did that lead to founding Egregious?

Rupert:

It’s been a quirky path—from maths and physics to countering deception.

Like many in this space, I started in data science and AI. A decade ago, the AI industry was just emerging as a commercial field. Now, it’s one of the most dynamic industries in history.

When I first encountered AI, I saw it as a gold rush—a frontier that would reshape the future. It was clear even then that AI wouldn’t just transform industries; it would shape culture itself.

David:

You worked at Improbable. What did you take away from that experience?

Rupert:

A lot—one lesson being don’t raise money from SoftBank too early unless you have a clear exit plan.

Improbable focused on scaling virtual worlds—creating massive, dense simulations using AI and computing power. That experience made me think deeply about the future of social media and digital interactions.

Would the metaverse replace social media? Would it merge with the physical world in a Snow Crash-like cyberpunk reality?

And once AI was introduced into that mix, the question became: Are we still in control, or is the tail wagging the dog? That led me to focus on the risks AI presents—and ultimately to founding Egregious.

David:

Finally, paint us a picture of the future. Where do you see Egregious in a few years?

Rupert:

I’d actually flip that question.

When you ask what something looks like, you assume it’s visible. But our goal for Egregious is to be imperceptible—infrastructure that seamlessly protects AI-human interactions.

Think of Cloudflare, which operates in the background to keep the internet secure. We want to be the same for AI—ensuring AI tools are safe, reliable, and free from deception.

If we do our job right, AI will be something people can trust. They’ll use it without fear of manipulation or misinformation. That will unlock AI’s full potential—because every great technology needs guardrails to prevent it from turning against us.

David:

On that note, I just want to say how thrilled we are to be investing in and partnering with Egregious.

We can’t wait to see what you do next. Thanks for speaking with us today, and we look forward to checking in on your progress in the future.

Rupert:

Thanks so much, David.


Due to the potential for losses, the Financial Conduct Authority (FCA) considers this investment to be high risk.

What are the key risks?

  1. You could lose all the money you invest
    1. If the business you invest in fails, you are likely to lose 100% of the money you invested. Most start-up businesses fail.
  2. You are unlikely to be protected if something goes wrong
    1. Protection from the Financial Services Compensation Scheme (FSCS), in relation to claims against failed regulated firms, does not cover poor investment performance. Try the FSCS investment protection checker here.
    2. Protection from the Financial Ombudsman Service (FOS) does not cover poor investment performance. If you have a complaint against an FCA-regulated firm, FOS may be able to consider it. Learn more about FOS protection here.
  3. You won’t get your money back quickly
    1. Even if the business you invest in is successful, it may take several years to get your money back. You are unlikely to be able to sell your investment early.
    2. The most likely way to get your money back is if the business is bought by another business or lists its shares on an exchange such as the London Stock Exchange. These events are not common.
    3. If you are investing in a start-up business, you should not expect to get your money back through dividends. Start-up businesses rarely pay these.
  4. Don’t put all your eggs in one basket
    1. Putting all your money into a single business or type of investment, for example, is risky. Spreading your money across different investments makes you less dependent on any one to do well.
    2. A good rule of thumb is not to invest more than 10% of your money in high-risk investments (see https://www.fca.org.uk/investsmart/5-questions-ask-you-invest).
  5. The value of your investment can be reduced
    1. The percentage of the business that you own will decrease if the business issues more shares. This could mean that the value of your investment reduces, depending on how much the business grows. Most start-up businesses issue multiple rounds of shares.
    2. These new shares could have additional rights that your shares don’t have, such as the right to receive a fixed dividend, which could further reduce your chances of getting a return on your investment.

 

If you are interested in learning more about how to protect yourself, visit the FCA’s website here.