
FalkorDB’s GraphRAG to the Rescue!

The Angle Issue #230

Gil Dibner

We are super excited to have backed Guy Korland, Roi Lipman, Avi Avni and the entire team at FalkorDB to build the future of real-world enterprise AI applications. A few days ago, the investment broke stealth mode and was covered by the press in both English (by Calcalist) and Hebrew (by Geektime).

Everyone knows that AI is important and that LLMs are very powerful. So every enterprise has already run off and tried to build enterprise-grade applications based on LLMs, mostly using on-prem, open-source models. As Bill Gurley recently said on the BG2 Pod, “LLMs may not be the best tool for every use case, but we are definitely going to find out,” because every founder and CIO on the planet is busy throwing LLMs at every imaginable problem. That is great for innovation in the long term, but it’s confusing in the short term.

In many ways, however, the results of this experiment are already becoming clear to those on the cutting edge of these endeavors. We are all gradually coming to the same conclusion: LLMs are great at some things, but they simply cannot serve as the backbone of real-world enterprise (or consumer) applications. To do that, they must be married to a traditional (human-readable) database - ideally a Graph Database.

LLMs excel at two things. First, they enable powerful bi-directional human interfaces: natural language in and easy-to-consume natural language out. Second, they are outstanding at unifying disparate data sources: LLMs are great at enabling vector-based similarity searches across vast stores of unstructured information, and they are also powerful tools for fusing data together, including structured and unstructured data.
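The second strength - vector-based similarity search - can be illustrated with a minimal, self-contained sketch. The document names and tiny hand-made vectors below are hypothetical stand-ins; in a real system the vectors would come from an embedding model and the search would run inside a vector index or database:

```python
import math

# Hypothetical hand-made "embeddings": in practice these would be produced by
# an embedding model over real documents; tiny 3-d vectors suffice to illustrate.
DOCS = {
    "contract_2023.txt": [0.9, 0.1, 0.0],
    "hr_handbook.txt":   [0.1, 0.8, 0.2],
    "q3_earnings.txt":   [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_match(query_vec, docs):
    """Return the document whose embedding is most similar to the query vector."""
    return max(docs, key=lambda name: cosine(query_vec, docs[name]))

# A query vector "close to" the earnings document retrieves it.
print(top_match([0.1, 0.2, 0.95], DOCS))  # → q3_earnings.txt
```

The point of the sketch is simply that similarity search finds "nearest meaning," not exact matches - which is exactly why it unifies disparate, unstructured sources so well, and also why it offers no guarantees of factual correctness on its own.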

But LLMs are not great for reasoning, and they are very difficult for humans to understand, debug, control, and limit. The use of LLMs (as with any other technology) brings with it a host of issues that need to be solved: error checking, debugging, permissions, security, audit, etc. Hallucination - the tendency of LLM-powered applications to emit nonsensical answers that “sound” truthy but are straight-up wrong - is the most famous bug of LLMs, and provides a great illustration of why traditional (i.e. human-readable) databases need to be coupled with LLMs to produce reliable results that enable useful real-world applications. The same architecture can be used to implement other types of controls that will unleash the true potential of LLMs. They will not be the solution, but they will be part of it.

Guy Korland and the team at FalkorDB understood this from day one. Having built RedisGraph (the most performant in-memory Graph Database in the world), there is no team better suited to solve this problem - which is arguably the biggest problem in enterprise data infrastructure today: how do we truly democratize the creation of reliable, debuggable, auditable, useful, scalable, secure, enterprise-grade AI applications that can leverage the powerful novel capabilities of LLMs without sacrificing the tremendous advantages of traditional databases?

The answer to the limitations of LLMs is FalkorDB. And the key insight is that no enterprise user ever wanted a vector database or an "LLM." What they wanted was a knowledge base, which is the core benefit that FalkorDB’s approach enables. FalkorDB achieves this by creating an architecture that fuses knowledge graphs (KGs) and LLMs together into one seamless technology stack, something they call GraphRAG (graph-retrieval-augmented generation).

According to a recent blog post by FalkorDB, “Knowledge Graphs are a powerful tool for representing and querying structured data. They can capture the relationships and attributes of entities, such as people, places, events, products, etc. Knowledge Graphs can also enrich the data with external sources, such as Wikipedia, DBpedia, or other domain-specific ontologies….Knowledge Graphs and LLMs can work together to create a powerful synergy. By combining Knowledge Graphs and LLMs, we can leverage the strengths of both approaches and overcome their weaknesses. We can use Knowledge Graphs to provide structured and factual information to LLMs, and use LLMs to provide natural language generation and understanding capabilities to Knowledge Graphs.” GraphRAG can become a powerful tool across a range of use cases, but it uses Knowledge Graphs not only as an input or a source of information for LLMs, but also as an output or a target of information for LLMs. “In other words, GraphRAG uses LLMs not only to generate natural language texts from Knowledge Graphs, but also to generate Knowledge Graphs from natural language texts.”
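The retrieve-from-graph-then-generate loop that the post describes can be sketched in a few lines of Python. Everything here is a hypothetical illustration - the toy triples, the `retrieve` helper, and the stubbed LLM call are not FalkorDB's actual API, which would involve a real graph database queried with a language like Cypher:

```python
# A toy knowledge graph as (subject, relation, object) triples -- the kind of
# structured, factual data a GraphRAG system would pull from a graph database.
TRIPLES = [
    ("FalkorDB", "develops", "GraphRAG"),
    ("GraphRAG", "combines", "Knowledge Graphs"),
    ("GraphRAG", "combines", "LLMs"),
]

def retrieve(entity, triples):
    """Pull every fact mentioning the entity -- the 'retrieval' step."""
    return [t for t in triples if entity in (t[0], t[2])]

def build_prompt(question, facts):
    """Ground the LLM prompt in retrieved graph facts -- the 'augmented' step."""
    context = "\n".join(f"{s} {r} {o}." for s, r, o in facts)
    return f"Answer using only these facts:\n{context}\n\nQuestion: {question}"

def fake_llm(prompt):
    # Stand-in for a real LLM API call; echoes the first grounded fact line so
    # the whole flow is runnable without any external service.
    return prompt.splitlines()[1]

facts = retrieve("GraphRAG", TRIPLES)
print(fake_llm(build_prompt("What does GraphRAG combine?", facts)))
```

Note that the same triple structure works in the reverse direction the quote describes: an LLM can emit (subject, relation, object) triples extracted from free text, which are then inserted into the graph as new facts.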

It’s rare indeed to come across an experienced team that has worked together in the past to build and deliver best-of-breed fundamental technology. It’s equally rare to come across a team that has the unique perspective necessary to re-imagine the way things ought to be done in one of the most important areas of human endeavor. Amazingly, in FalkorDB, we found both in one company. We are incredibly excited to be joined in our investment in FalkorDB by Keenan Rice of Tokyo Black, Jerry Dischler (President, Cloud Applications at Google), Aryeh Mergi (co-founder of M-Systems and XtremIO), and Eldad Farkash and Saar Bitner (CEO and COO of Firebolt). We can’t wait to see what the FalkorDB team will deliver to the world in the near future, and - even more importantly - what this fundamentally novel approach to AI tooling will enable people to build as it unlocks the next level of value creation in the evolution of enterprise AI.


Revenue Durability in the LLM World
Everything about LLMs seems to make revenue durability more challenging than ever.

A Digital Fabric for Maritime Trade
Why we invested in Portchain.

Three Keys to the Kingdom
The sometimes-competing and sometimes-aligned goals that early-stage founders must manage.


Israel / Cybersecurity. Entro Security raised $18M, led by Dell Technologies Capital, for its non-human identity and secrets management platform.

Israel / SaaS. PointFive raised $16M, led by Index Ventures, for its end-to-end cloud cost optimization platform.

UK / LegalTech. Wordsmith raised $5M, led by Index Ventures and General Catalyst, for its “lawyer-in-the-loop,” generative AI-powered legal platform that hopes to augment the work of in-house legal teams.

Switzerland / LegalTech. DeepJudge raised $10.7M, led by Coatue, for its search platform specifically focused on legal domains.

Germany / SaaS. Atmio raised €5.1M, led by Notion Capital, for its operating system for finding, fixing and reporting methane emissions.

UK / SaaS. Scoop raised $3.5M, led by Ridge Ventures, for its decision-making support platform, with a UI inspired by a spreadsheet.



New Titan of Tech. Last Tuesday, Nvidia leapfrogged Apple and Microsoft to become the most valuable public company in the world. Nvidia’s ascendance just reinforces the impact of the generative AI boom over the past year and the demand for Nvidia’s GPUs. For a sense of scale, just two years ago, the company was worth $400 billion. As of last Tuesday, the company was worth over $3 trillion.

Safe Superintelligence. Ilya Sutskever, cofounder and former chief scientist of OpenAI, and a member of the crew that ousted Altman as CEO back in 2023, has announced his latest company - Safe Superintelligence. Safe Superintelligence will focus on building superintelligence that is designed not to cause harm, rather than chasing “short-term commercial” gains. Deep dive on what we know about the company so far here.

Apple vs. EU. Last Friday, Apple announced that they would not be launching their new artificial intelligence features in the European Union because of concerns around the Digital Markets Act (DMA). The question is what kind of impact will this decision have on European regulators, if any? And will this stand by Apple lead to any sort of outcry by the European public? Some have started arguing that Europe is falling behind US and China tech sectors in large part due to these sorts of regulations.


The Bullshit Machine. A provocative title for this latest deep dive from Wired, which reviews research from developer Robb Knight that suggests that Perplexity has been ignoring the Robots Exclusion Protocol (more widely known as robots.txt), and hallucinates (“bullshits”) a significant amount. From the Wired article: “The results showed the chatbot at times closely paraphrasing WIRED stories, and at times summarizing stories inaccurately and with minimal attribution. In one case, the text it generated falsely claimed that WIRED had reported that a specific police officer in California had committed a crime. (The AP similarly identified an instance of the chatbot attributing fake quotes to real people.) Despite its apparent access to original WIRED reporting and its site hosting original WIRED art, though, none of the IP addresses publicly listed by the company left any identifiable trace in our server logs, raising the question of how exactly Perplexity’s system works.”

This question on data use is a fundamental one for new AI companies. Read more from Ben Thompson at Stratechery about Perplexity and robots.txt (and Perplexity’s defense) here. 

Database Wrapper Applications. A point worth considering, from Brian Halligan, founder and CEO of Hubspot:

AI price wars. At the start of 2023, experts said that Chinese LLMs were “a decade behind the American cutting edge.” That assessment now looks obviously wrong - or at least it changed quite quickly. There are now over 100 Chinese models boasting more than a billion parameters with no clear dominant player, which has led companies to compete primarily on price. Why does this matter? It shows just how quickly models depreciate in value. This should also be a warning shot to US companies. ByteDance kicked off this price war by pricing access to its LLMs at 99.8% below the price of GPT-4. When will this battle reach US and European markets?


What if no IPO? If, as Philippe Laffont from Coatue argued, $10B enterprise value is the threshold for going public (as discussed on the latest BG2 Pod), then what does that mean for venture returns? Jai Das from Sapphire Ventures suggests the threshold is a bit lower, but still quite high: $500M in revenue and growing at a minimum of 20%. The problem, of course, is that not many of the 1,400 private unicorns are close to reaching either threshold. So what then? Is PE the answer? So suggests this latest article from The Information, which highlights that many PE firms are canvassing the market for companies with less than $100M ARR as potential roll-ups into portfolio companies they already acquired. Of course, acquisitions by bigger fish are, as always, also on the table, as evidenced by recent reporting that the Hugging Face CEO is fielding dozens of meeting requests per week from hopeful founders and investors.

AI’s $600B Question. David Cahn from Sequoia wrote an excellent piece this past week in which he tries to do the math on how much revenue AI will need to generate to justify the CapEx currently being invested by VCs and hyperscalers alike. This is a follow-up to a piece he wrote last year titled “AI’s $200B Question.” That number, following a year of intense investment, has ballooned to $600B. Where will that revenue come from? Surely AI will produce a lot of end-consumer value. But some aspects of this infrastructure build-out make Cahn question the ability of AI vendors to capture much of the value they’ll be creating. One interesting point he highlights, related to an article linked above, is the problem of pricing power. Previous infrastructure build-outs throughout history - like the railroads or the Internet - bestowed monopolies upon their investors. There can only be so many railroad tracks or broadband lines. That’s not true for AI infrastructure. As Cahn writes: “Without a monopoly or oligopoly, high fixed cost + low marginal cost businesses almost always see prices competed down to marginal cost (e.g., airlines).”


Januar cut EUR-to-crypto trading fees by 55%.

Planable’s CMO Miruna Dragomir shared in the latest ChurnFM podcast how to optimize user acquisition strategies for long-term customer retention with lessons from Uber and Planable.


