Phillip Burr, Head of Product at Lumai – Interview Series

Phillip Burr is the Head of Product at Lumai, with over 25 years of experience in global product management, go-to-market and leadership roles within leading semiconductor and technology companies, and a proven track record of building and scaling products and services.

Lumai is a UK-based deep tech company developing 3D optical computing processors to accelerate artificial intelligence workloads. By performing matrix-vector multiplications using beams of light in three dimensions, their technology offers up to 50x the performance and 90% less power consumption compared to traditional silicon-based accelerators. This makes it particularly well-suited for AI inference tasks, including large language models, while significantly reducing energy costs and environmental impact.

What inspired the founding of Lumai, and how did the idea evolve from University of Oxford research into a commercial venture?

The initial spark was ignited when one of the founders of Lumai, Dr. Xianxin Guo, was awarded an 1851 Research Fellowship at the University of Oxford. The interviewers understood the potential for optical computing and asked whether Xianxin would consider filing patents and spinning out a company if his research was successful. This got Xianxin’s creative mind firing, and when he, alongside one of Lumai’s other co-founders, Dr. James Spall, had proven that using light to do the computation at the heart of AI could both dramatically boost AI performance and reduce the energy consumed, the stage was set. They knew that existing silicon-only AI hardware was (and still is) struggling to increase performance without significantly increasing power and cost; hence, if they could solve this problem using optical compute, they could create a product that customers wanted. They took this idea to some VCs who backed them to form Lumai. Lumai recently closed its second round of funding, raising over $10m and bringing in additional investors who also believe that optical compute can continue to scale and meet increasing AI performance demand without increasing power.

You’ve had an impressive career across Arm, indie Semiconductor, and more — what drew you to join Lumai at this stage?

The short answer is team and technology. Lumai has an impressive team of optical, machine learning and data center experts, bringing in experience from the likes of Meta, Intel, Altera, Maxeler, Seagate and IBM (along with my own experience at Arm, indie, Mentor Graphics and Motorola). I knew that a team of remarkable people so focused on solving the challenge of slashing the cost of AI inference could do amazing things.

I firmly believe that the future of AI demands new, innovative breakthroughs in computing. The promise of being able to offer 50x the AI compute performance, as well as cutting the cost of AI inference to 1/10th of today’s solutions, was just too good an opportunity to miss.

What were some of the early technical or business challenges your founding team faced in scaling from a research breakthrough to a product-ready company?

The research breakthrough proved that optics could be used for fast and very efficient matrix-vector multiplication. Despite the technical breakthroughs, the biggest challenge was convincing people that Lumai could succeed where other optical computing startups had failed. We had to spend time explaining that Lumai’s approach was very different: instead of relying on a single 2D chip, we use 3D optics to reach the necessary levels of scale and efficiency. There are of course many steps to get from lab research to technology that can be deployed at scale in a data center. We recognized very early on that the key to success was bringing in engineers with experience in developing high-volume products for data centers. The other area is software: it is essential that standard AI frameworks and models can benefit from Lumai’s processor, and that we provide the tools and frameworks to make this as seamless as possible for AI software engineers.
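
As a purely hypothetical illustration of that "seamless" goal, the sketch below shows how an accelerated linear layer could slot into existing PyTorch model code without changes elsewhere. Both `offload_mvm` and `OffloadedLinear` are invented names for this example, not Lumai's actual API.

```python
# Hypothetical sketch only: a drop-in linear layer that routes its
# matrix-vector product to an accelerator backend. `offload_mvm` is an
# invented placeholder, not a real Lumai API.
import torch
import torch.nn as nn

def offload_mvm(weight: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # Placeholder: a real backend would dispatch this to the optical card.
    return x @ weight.T

class OffloadedLinear(nn.Linear):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = offload_mvm(self.weight, x)
        return y + self.bias if self.bias is not None else y

layer = OffloadedLinear(1024, 1024)
out = layer(torch.randn(8, 1024))  # surrounding model code is unchanged
print(out.shape)                   # torch.Size([8, 1024])
```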

Lumai’s technology is said to use 3D optical matrix-vector multiplication. Can you break that down in simple terms for a general audience?

AI systems need to do a lot of mathematical calculations called matrix-vector multiplication. These calculations are the engine that powers AI responses. At Lumai, we do this using light instead of electricity. Here's how it works:

  1. We encode information into beams of light
  2. These light beams travel through 3D space
  3. The light interacts with lenses and special materials
  4. These interactions complete the mathematical operation

By using all three dimensions of space, we can process more information with each beam of light. This makes our approach very efficient – reducing the energy, time and cost needed to run AI systems.
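
For readers who think in code, here is a minimal digital stand-in for the operation described above, with NumPy playing the role of the optical hardware; the names and sizes are illustrative only, not Lumai's implementation.

```python
# Conceptual sketch: the matrix-vector multiplication at the heart of AI,
# which Lumai performs optically rather than electronically.
import numpy as np

def optical_style_mvm(weights: np.ndarray, activations: np.ndarray) -> np.ndarray:
    """Compute y = W @ x, the operation done in one optical pass.

    Optically: `activations` would be encoded onto beams of light, the
    beams would pass through lenses and materials representing `weights`,
    and the detected light would give the result.
    """
    return weights @ activations

# A 1024 x 1024 matrix-vector product: ~1M multiply-accumulates, which the
# optical processor is described as completing in a single cycle.
W = np.random.randn(1024, 1024).astype(np.float32)
x = np.random.randn(1024).astype(np.float32)
y = optical_style_mvm(W, x)
print(y.shape)  # (1024,)
```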

What are the main advantages of optical computing over traditional silicon-based GPUs and even integrated photonics?

Because the rate of advancement in silicon technology has significantly slowed, each step up in performance of a silicon-only AI processor (like a GPU) results in a significant increase in power. Silicon-only solutions consume an incredible amount of power and are chasing diminishing returns, which makes them incredibly complex and expensive. The advantage of using optics is that once in the optical domain there is practically no power being consumed. Energy is used to get into the optical domain but, for example, in Lumai’s processor we can achieve over 1,000 computation operations for each beam of light, every single cycle, making it very efficient. This scalability cannot be achieved using integrated photonics, due to both physical size constraints and signal noise; the number of computation operations of a silicon-photonic solution is only around 1/8th of what Lumai can achieve today.
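
A quick back-of-envelope reading of those figures, assuming each beam carries one element of a 1024-wide vector (an illustrative assumption based on the 1024 x 1024 operation mentioned later in this interview, not a published spec):

```python
# Illustrative accounting only, not an official Lumai specification.
vector_width = 1024                      # from the 1024 x 1024 operation
macs_per_beam_per_cycle = vector_width   # one multiply-accumulate per output
print(macs_per_beam_per_cycle)           # 1024 -> "over 1,000 operations"

# On the same simplified accounting, the quoted silicon-photonics figure
# of 1/8th would correspond to roughly:
print(vector_width // 8)                 # 128 operations per beam per cycle
```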

How does Lumai’s processor achieve near-zero latency inference, and why is that such a critical factor for modern AI workloads?

Although we wouldn’t claim that the Lumai processor offers zero latency, it does execute a very large (1024 x 1024) matrix-vector operation in a single cycle. Silicon-only solutions typically divide a matrix into smaller matrices, process them individually step by step, and then combine the results. This takes time and results in more memory and energy being used. Reducing the time, energy and cost of AI processing is critical both to allowing more businesses to benefit from AI and to enabling advanced AI in the most sustainable way.
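
To see why single-pass execution matters, here is an illustrative NumPy comparison of silicon-style tiling against a one-shot 1024 x 1024 product; the 128-element tile size is an arbitrary assumption for the example, not a description of any particular chip.

```python
# Illustrative comparison (not Lumai code): tiled execution takes many
# sequential steps plus a combine, while a single-pass product does not.
import numpy as np

def tiled_mvm(W: np.ndarray, x: np.ndarray, tile: int = 128) -> np.ndarray:
    """Emulate silicon-style tiling: many small steps, then combine."""
    n, m = W.shape
    y = np.zeros(n, dtype=W.dtype)
    steps = 0
    for i in range(0, n, tile):        # tile over output rows
        for j in range(0, m, tile):    # tile over input columns
            y[i:i + tile] += W[i:i + tile, j:j + tile] @ x[j:j + tile]
            steps += 1                 # each step costs time and memory traffic
    print(f"tiled: {steps} sequential steps")
    return y

W = np.random.randn(1024, 1024).astype(np.float32)
x = np.random.randn(1024).astype(np.float32)

y_tiled = tiled_mvm(W, x)   # tiled: 64 sequential steps
y_single = W @ x            # optical analogue: one pass, one "cycle"
assert np.allclose(y_tiled, y_single, atol=1e-2)
```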

Can you walk us through how your PCIe-compatible form factor integrates with existing data center infrastructure?

The Lumai processor uses PCIe form factor cards alongside a standard CPU, all within a standard 4U shelf. We are working with a range of data center rack equipment suppliers so that the Lumai processor integrates with their own equipment. We use standard network interfaces, standard software, etc. so that externally the Lumai processor will just look like any other data center processor.

Data center energy usage is a growing global concern. How does Lumai position itself as a sustainable solution for AI compute?

Data center energy consumption is increasing at an alarming rate. According to a report from the Lawrence Berkeley National Laboratory, data center power use in the U.S. is expected to triple by 2028, consuming up to 12% of the country’s power. Some data center operators are contemplating installing nuclear power to provide the energy needed. The industry needs to look at different approaches to AI, and we believe that optics is the answer to this energy crisis.

Can you explain how Lumai’s architecture avoids the scalability bottlenecks of current silicon and photonic approaches?

The performance of the first Lumai processor is only the start of what is achievable. We expect our solution to continue delivering huge leaps in performance by increasing optical clock speeds and vector widths, all without a corresponding increase in energy consumed. No other solution can achieve this. Standard digital silicon-only approaches will continue to cost more and consume more power for every increase in performance. Silicon photonics cannot achieve the vector width needed, and hence companies that were looking at integrated photonics for data center compute have moved to address other parts of the data center – for example, optical interconnect or optical switching.

What role do you see optical computing playing in the future of AI — and more broadly, in computing as a whole?

Optics as a whole will play a huge part in data centers going forward – from optical interconnect and optical networking to optical switching and, of course, optical AI processing. The demands that AI is placing on the data center are the key driver of this move to optical. Optical interconnect will enable faster connections between AI processors, which is essential for large AI models. Optical switching will enable more efficient networking, and optical compute will enable faster, more power-efficient and lower-cost AI processing. Collectively they will help enable even more advanced AI, overcoming the challenges of the slowdown in silicon scaling on the compute side and the speed limitations of copper on the interconnect side.

Thank you for the great interview. Readers who wish to learn more should visit Lumai.
