May 7th, 2025
Lokdeep Singh and Ruby Sheera
Since the emergence of ChatGPT in 2022, AI has progressed at breakneck speed, and now every business needs an AI strategy. PE-backed software companies, or those seeking PE backing, are racing to define how and where they should deploy the technology to avoid getting left behind. For private equity investors, the challenge is understanding the impact it will have on current and potential investments, as well as what to look for in terms of adoption and readiness, to mitigate the disruption risk in the future.
Here we debunk some AI myths and highlight the key issues investors and portfolio companies should focus on.
About Lokdeep Singh
A B2B software specialist, Lokdeep Singh spent the first six years of his career as a telecommunications software engineer before joining WorldCell Inc., a Washington, D.C.-based startup, which was later acquired by private equity. Having experienced the entire business lifecycle and value creation journey, he then took on numerous CXO roles within PE-backed businesses, including as CTO at MACH in Luxembourg (acquired by Syniverse), Chief Innovation Officer at Syniverse, Chief Product Officer & General Manager at Openwave Messaging Inc., General Manager and SVP Messaging at Synchronoss Technologies, and then as CEO at Talkwalker. He is currently a senior advisor at Providence Equity Partners.
About Ruby Sheera, Managing Director, Tech and Tech-Enabled Practice
Ruby leads on searches within PE-backed and high-growth technology and tech-enabled businesses. Ruby has undertaken multiple searches on behalf of clients to maximise the opportunity that digitisation presents across various B2B and B2C sectors.
Q&A
AI covers a range of tools and technologies, but there is much confusion over the terminology used in the context of AI. Could you start by providing a ‘dummy’s guide’ to AI, as it is being used today?
The Holy Grail of AI is to achieve the so-called singularity, where computers can mimic the human mind in speed and reasoning. However, while we have made significant progress in the last four or five years, we're not yet there.
When I was initially exposed to AI, it was called machine learning and involved feeding high volumes of data into a machine, then spending months or longer training the algorithms through an iterative process. While machine learning is mainly focused on pattern recognition and predictive analysis, AI has undergone a pivotal change with the launch and rapid adoption of generative AI (genAI) capabilities, which enable the production of new content that mimics human intelligence and creativity. These capabilities are enabled by foundational large language models (LLMs), such as those behind ChatGPT, Google Gemini, and Claude.
In basic terms, LLMs are models or algorithms that are pre-trained on vast amounts of data, such as all the information published online. They differ from traditional machine learning in that the model typically doesn’t require further training by the user and features a user interface that communicates in natural language, answering questions and providing generally reliable responses. LLMs support zero-shot learning, which, in simple terms, means they can recognise new concepts even when these haven’t specifically formed part of their training, enabling human-like creativity and reasoning.
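As a loose illustration of the zero-shot idea, the sketch below asks a general-purpose LLM to classify a support ticket into labels it was never explicitly trained on for this task, with no task-specific training step. It uses the openai Python client; the model name, category labels, and example ticket are assumptions made purely for illustration.

```python
# A minimal, illustrative sketch of zero-shot classification with a general-purpose LLM.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable;
# the model name and category labels are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

def classify_ticket(ticket_text: str) -> str:
    """Ask the model to pick a label it was never explicitly trained on for this task."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Classify the support ticket as one of: billing, outage, feature_request. "
                           "Reply with the label only.",
            },
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_ticket("Our invoice doubled this month and nobody can tell us why."))
```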
The evolution is continuing in two ways. Firstly, genAI is evolving from understanding text to a more general, multimodal understanding. For example, a new tool from OpenAI called Sora can generate accurate and convincing videos based on a user’s description, effectively showing that it has developed an implicit understanding of the laws of physics.
The next goal is so-called agentic AI, which means that instead of simply answering questions, AI can carry out tasks from start to finish. For example, if you want to arrange a meal, it will find a suitable recipe, then order and pay for the food, in a fully automated workflow.
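To make the “start to finish” point concrete, below is a deliberately simplified sketch of that kind of agentic workflow. Every function in it (find_recipe, order_groceries, pay) is a hypothetical placeholder rather than a real API, and production agents would add planning, error handling, and human approval steps.

```python
# A deliberately simplified sketch of an agentic workflow: rather than just answering a
# question, the "agent" chains tools together to complete the task end to end.
# Every function here is a hypothetical placeholder; production agents add planning,
# error handling, and human approval before anything is ordered or paid for.
from dataclasses import dataclass


@dataclass
class Order:
    items: list[str]
    total: float


def find_recipe(request: str) -> list[str]:
    # Placeholder: in practice an LLM or recipe API would resolve the request.
    return ["pasta", "tomatoes", "basil"]


def order_groceries(items: list[str]) -> Order:
    # Placeholder: in practice this would call a grocery delivery service.
    return Order(items=items, total=14.50)


def pay(order: Order) -> str:
    # Placeholder: in practice this would call a payment provider, ideally with human sign-off.
    return f"Paid {order.total:.2f} for {len(order.items)} items"


def run_agent(request: str) -> str:
    """Carry out the whole task from start to finish, not just answer a question."""
    ingredients = find_recipe(request)
    order = order_groceries(ingredients)
    return pay(order)


print(run_agent("Arrange a simple dinner for two tonight"))
```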
Are fears about AI’s potential to disrupt the software sector and value creation justified? If so, what are the biggest concerns for private equity?
The adoption curve for AI isn’t slow and gradual; it's happening now and is very real. So, in that sense, the urgency and fear are justified. The biggest ‘fear factor’ is AI’s potential for disruption over a five-year hold period. Every company needs to be discussing AI, and, for every potential investment, investors need to consider how that vertical may be disrupted.
Investors must try to predict the winners, which will come in waves. So, the first wave comprises infrastructure players, including the large language model providers, which are already being disrupted by the likes of DeepSeek. It's been challenging for PE, aside from the largest funds, to participate in that market because it requires such a significant investment.
The second wave appears to involve SaaS companies that become early adopters of AI. Satya Nadella, Chairman and CEO of Microsoft, argued that AI will ultimately replace SaaS. Still, I don’t believe that’s a given if SaaS companies are proactive in adopting AI and embedding it in their products. The current wave of SaaS innovation presents both risk and opportunity for private equity.
Then, the third wave, which is currently being discussed, is agentic AI, where AI isn’t just a tool or assistant, but can execute an end-to-end task. When that becomes a reality, it will undoubtedly start to compete with some of the simpler SaaS use cases, so it is something to consider for the not-too-distant future.
Hype rather than fundamentals has often driven AI valuations, making it difficult to assess the technology’s true impact in a rapidly changing landscape. How does a fund navigate value uncertainty?
All SaaS businesses are ripe for disruption by AI, and some categories are more vulnerable than others, such as martech, content management, publishing, and ed-tech. This is the first consideration for investors. In some cases, companies have even begun to see the impact of AI on their pipeline, as customers consider whether they can bring tasks in-house using AI. This is leading to a longer sales cycle, especially for SaaS companies without an AI roadmap.
Investors considering a company or vertical under threat of disruption should ensure that the company has a well-documented and operational AI strategy, including an assessment of the threats and of the opportunities to drive innovation, efficiency, and value by adopting and embracing AI within the business. Don't be satisfied with an “AI-enabled” label on the website and marketing collateral.
Secondly, validate that the management team has a plan to ensure organisational readiness, covering how well AI is integrated into the company culture; what the workforce is doing differently; and how workflows have evolved to deliver margin improvements. The key is assessing how mindsets have changed and ensuring they’re going in the right direction.
What levels of AI strategies and usage should investors look for in different businesses?
Given AI’s rapid development, every company, regardless of its vertical, should utilise AI. There are internal use cases for every company, and if a company is not on that path, it'll be left behind.
The next stage involves companies experimenting with efficiency gains on both the product and customer sides, as well as in R&D and quality improvement. Then, if a company has big data or analytics as part of its product offering, it should look to develop its own smaller, fine-tuned models that are specific to its industry and vertical.
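As a rough sketch of what building such a vertical-specific model can involve in practice, the example below prepares domain-specific training examples in the JSONL chat format commonly used by hosted fine-tuning services. The file name, system prompt, and example content are assumptions for illustration only.

```python
# A rough sketch of preparing domain-specific training data for a smaller, fine-tuned model.
# Each JSONL line pairs a realistic question from the company's vertical with the answer
# the tuned model should give. File name and examples are illustrative placeholders.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are an assistant for a logistics SaaS platform."},
            {"role": "user", "content": "Why was shipment 1042 flagged?"},
            {"role": "assistant", "content": "It exceeded the carrier's declared-value threshold and needs manual review."},
        ]
    },
    # ...hundreds to thousands more examples drawn from the company's own data
]

with open("vertical_finetune.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```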
What an investor should be looking at, and what an operator should be doing, is putting an AI roadmap in place that prioritises experimentation and adoption use cases based on the type of business and where the threats and opportunities lie.
What scalability challenges should investors be looking for on the technology and infrastructure side?
Scaling technology infrastructure will become progressively easier as computing power becomes more affordable and the technology advances. DeepSeek and others have shown it’s possible to spend much less and still get very good results.
Nonetheless, the challenge as a management team is picking the right tools to deliver the best results and mitigate risks. For example, the convenience of buying tools off the shelf must be balanced with data privacy, security, and customer concerns.
The integration challenge also depends on what your tech stack looks like. If it's a legacy system, integration is going to be harder than if the business is cloud-native. Selecting the right tools and tech stack requires some technical expertise, which is why AI should initially be led by the CTO and data science team while the capability becomes established. Then, as maturity increases, it can be decentralised.
What challenges are investors and management teams facing in organisational readiness, and how can they become more prepared for AI adoption?
Adopting AI successfully requires a strong commitment, and it must be driven from the C-level. It requires a mindset change across the entire business; if you want employees to preach to customers about the benefits of AI, they need to be using it.
Some companies have a team in charge of AI, but if you want to drive change at scale, the whole organisation needs to adopt it for internal use cases, such as producing marketing content or checking legal terms and conditions. Currently, there are very few companies driving AI as a cross-functional, company-wide strategic initiative.
AI adoption is easier for a tech-savvy company because a large part of the organisation will be tech-savvy. It’s more challenging when a company is not purely technology-focused; however, tools are being developed to make it easier by providing a wrapper around OpenAI or Microsoft models, simplifying them, and aiding governance.
Where should companies start in their AI adoption journey? What are the easy-win use cases?
I'm a big fan of experimenting with internal use cases first because the risk is lower. Every company should test simple, benign use cases, such as creating an internal AI assistant based on a foundational model. This enables a company to build confidence, develop a user base, and start tracking response quality.
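As one example of what such an internal assistant could look like at its simplest, the sketch below wraps a foundational model behind a single helper function and logs every exchange so the team can start tracking response quality. It assumes the openai Python client; the model name, system prompt, and log file location are placeholders, not recommendations.

```python
# A minimal sketch of an internal AI assistant: a thin wrapper around a foundational model
# that logs every exchange so the team can track response quality over time.
# Assumes the `openai` package and an API key; model name and log path are placeholders.
import json
import time

from openai import OpenAI

client = OpenAI()
LOG_PATH = "assistant_log.jsonl"  # placeholder location for later quality review

def ask_internal_assistant(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are an internal assistant. Answer concisely."},
            {"role": "user", "content": question},
        ],
    )
    answer = response.choices[0].message.content
    # Record question/answer pairs so reviewers can rate responses and build confidence.
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps({"ts": time.time(), "question": question, "answer": answer}) + "\n")
    return answer

print(ask_internal_assistant("Summarise our travel expense policy in three bullet points."))
```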
AI often falls short not because it lacks the capability, but because it's launched as a siloed initiative. For example, the sales team uses it for sales emails, but it doesn’t work because it hasn’t been properly tuned. Small, internal, benign use cases help change the mindset and gradually build confidence.
What about ethics and data privacy?
There are multiple concerns around ethics, and it’s an ongoing debate; nobody has all the answers yet. You can argue that the singularity (where computers are so smart that they can outthink humans) is a good thing, but then there are concerns about what it means if people aren’t ultimately in control. Keeping a human in the loop (a safeguard referenced in the EU AI Act) is one answer, but it comes with implementation and operational challenges. The other potential problem is bias arising from incomplete or unrepresentative data.
Another issue raised by AI governance, safety, and privacy expert Katalina Hernández is that as AI becomes more intelligent, humans will become less intelligent because we'll outsource everything to the machine. Hernández also raises concerns about the lack of transparency in how AI makes decisions and about how to ensure that people can analyse and question its recommendations. These are all critical questions to consider as AI develops, and regulations such as the EU AI Act are already attempting to address them.
AI is no longer a future consideration; it’s a present imperative. For private equity investors and their portfolio companies, the question isn’t if AI will impact value creation, but how fast and how deeply. From assessing disruption risks and building internal capabilities to embedding AI into products and processes, success will hinge on strategic clarity, cultural readiness, and leadership alignment.
The winners in this new wave of transformation won’t just be those who adopt AI, but those who understand how to deploy it with purpose, discipline, and vision.
Let’s Talk AI, Leadership, and Value Creation
At The LCap Group, we partner with investors and operators to build future-ready leadership teams equipped to navigate disruption and unlock growth. Whether you’re evaluating a new investment, scaling a tech-enabled business, or transforming your leadership strategy, our frameworks and expertise ensure you’re not just reacting to change, but leading it.
Get in touch today to explore how DRAX and DRAX Affinity can support your AI and leadership journey. Let’s build long-term value, together.