Cubie is designed so that your company's private data is never seen by or fed into an LLM.
Depending on your requirements, a range of data-security options is available:
You can assign unique login credentials for each user, or sync Cubie with an existing authentication system.
Users will be able to:
When you ask a question about structured data, an LLM is often very good at working out how to answer it: it generates a set of instructions to follow (which can be technical code). The answers it produces directly, however, can be unreliable.
Cubie takes the instructions the LLM provides and sends them to a compute service for execution, which can be hosted in your cloud. All calculations on your data are performed by this service; your data never has to be uploaded or exported.
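The separation described above can be sketched in a few lines. This is an illustrative assumption about the general pattern, not Cubie's actual API: the LLM sees only metadata (column names), returns its instructions as code, and a compute service executes those instructions where the private data lives.

```python
# Hypothetical sketch: all names here are illustrative, not Cubie's API.

sales = [  # private rows: these never leave the compute environment
    {"region": "N", "revenue": 100},
    {"region": "S", "revenue": 250},
    {"region": "N", "revenue": 75},
]

# Only the schema is shared with the LLM, never the data itself.
schema = sorted(sales[0].keys())

def llm_generate_instructions(question: str, schema: list) -> str:
    """Stand-in for the LLM call: in a real system the instructions would be
    generated from the question and the schema. Here they are hard-coded."""
    return (
        "result = {}\n"
        "for row in data:\n"
        "    result[row['region']] = result.get(row['region'], 0) + row['revenue']\n"
    )

def execute_locally(instructions: str, data: list) -> dict:
    """The compute service: runs the generated instructions next to the data."""
    scope = {"data": data}
    exec(instructions, {}, scope)
    return scope["result"]

code = llm_generate_instructions("Total revenue by region?", schema)
print(execute_locally(code, sales))  # {'N': 175, 'S': 250}
```

The key property is visible in the flow: `llm_generate_instructions` receives only the question and the schema, so nothing private ever enters a prompt, while `execute_locally` runs wherever the data owner hosts it.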
Cubie offers a few advantages over ChatGPT (and its Code Interpreter component):
1. We built Cubie with customers who manage sensitive PII or legally regulated data in mind. At no point is private data ever sent into an LLM prompt. All manipulations of private data are performed within a controlled compute environment that can be hosted by the data owner.
2. Cubie is not intended to drive traffic to our site. She's intended to add convenience for users, wherever they already are.
3. Cubie's architecture can support any of the open-source LLMs being developed by the global community. As we evolve Cubie, we can ensure that she's using the latest and greatest, not locked into any vendor-specific LLM.
Cubie may be easy to install, but she packs quite a punch behind the scenes. Our proprietary approach to answering analytical questions produces more consistent and reliable results than a simple LLM query.
At their core, LLMs are non-deterministic: asked the same question repeatedly, they may produce a different answer each time. That is undesirable when there is only one right answer. They also tend to produce hallucinations that are hard to distinguish from the truth, one of the major risk points of the technology.
Our approach combines LLM thinking with more deterministic functions. We've found that this reduces the variability of the overall response and allows the handling of more complex questions. (Our research in this area is ongoing.)
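One way such a combination can work is self-consistency voting. This is a hedged sketch of the general idea, not Cubie's actual method: sample several candidate programs from the (stochastic) LLM, execute each one deterministically, and keep the answer that a majority of the executions agree on.

```python
# Hypothetical sketch of combining stochastic generation with deterministic
# execution; the candidate programs below stand in for repeated LLM samples.

from collections import Counter

def run(program: str, data):
    """Deterministic step: executing a fixed program always yields the same answer."""
    scope = {"data": data}
    exec(program, {}, scope)
    return scope["answer"]

# Pretend the LLM was asked three times for "the average of data";
# two samples are equivalent, one is a hallucinated variant.
candidates = [
    "answer = sum(data) / len(data)",
    "answer = sum(data) / len(data)",
    "answer = max(data)",  # wrong program: variability in generation
]

data = [2, 4, 6]
results = [run(p, data) for p in candidates]
answer, votes = Counter(results).most_common(1)[0]
print(answer)  # 4.0
```

Because each candidate program is executed deterministically, the randomness is confined to the generation step, where voting can average it out.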