Brillig Understanding, Inc.

Large Language Models vs ChatScript technology

LLMs "understand" natural language better than ChatScript in ordinary chat, but that does not mean they are good at everything.

Truth and Offense

ChatScript always says exactly what you want it to say when a rule matches. LLMs can make things up. That may be fine for entertainment conversation but is not appropriate for business. And LLMs can give offense by insulting a user or offering opinions the customer finds offensive. We use ChatScript to moderate input going to an LLM as well as output coming from it.

Footprint

ChatScript can fit in a small memory footprint, even on a smartwatch or a Raspberry Pi. LLMs require massive memory.

Internet access

ChatScript does not use the internet, unless your script invokes some website. LLMs generally require internet connectivity. So you drive your car into a tunnel and suddenly your smart car goes dumb?

Dialog Management

ChatScript has a built-in dialog manager that gives you complete control over conversational flow. Writing a state machine in an LLM is extremely difficult.
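As a sketch of what that dialog manager looks like in practice, here is a hypothetical ordering topic. The topic name, keywords, and wording are illustrative, not from a real bot:

```
# Hypothetical topic; names and keywords are illustrative.
topic: ~order_drink keep repeat ["coffee" "tea" "drink"]

# Gambit: the bot drives the conversation to a known state.
t: ASKDRINK () Would you like coffee or tea?
   # Rejoinders: the legal next states, matched against the user's reply.
   a: (coffee) Coffee it is. Milk or black?
      b: (milk) One coffee with milk, coming up.
      b: (black) One black coffee, coming up.
   a: (tea) Tea it is. With lemon?
```

Each rejoinder level (a:, b:) only fires in direct response to the line above it, so the script itself is the state machine.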

Legal Issues

Using an LLM typically means sending your customers' information across the internet, exposing you to privacy issues. And we don't currently know how the various copyright suits against LLMs will turn out.

Spoofing Issues

LLMs are vulnerable to prompt injection, where bad actors sneakily present the model with commands. In some reported cases, attackers hide prompts inside webpages the chatbot later reads, tricking it into downloading malware, helping with financial fraud, or repeating dangerous misinformation.

Input Correction

ChatScript supports a range of abilities to adjust the input based on context. It can perform spellchecking even on misspelled words that are themselves legal words. LLMs work word by word moving forward, and wrong words may decoy them.

Sample LLM Guardrails

Imagine you have a business bot. There are specific questions whose answers you want full control over; for those you use CS patterns, rejoinders, and responses. But you want this bot to handle everything else using an LLM, and you want to ensure customers don't get offended. So if your specific questions don't handle the input, you send it to the LLM. But you want guardrails.

First, you write a pattern that refuses to go to the LLM in certain subject areas (e.g., religion, terrorism, politics). Assuming you do proceed to the LLM, then on return you call ^analyze on the answer and run more CS rules against it, not only excluding answers in the areas you didn't want the LLM discussing, but also looking for insulting comments about the user. If the user had said "what do you think of my fashion sense" and the LLM gave a snarky reply like "your fashion sense is useless", you would want an insult filter like (you * be * ~negative_affect) to catch it so you can discard the LLM's answer.
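That guardrail flow can be sketched in ChatScript. This is a minimal sketch under stated assumptions, not production code: ^MyLLMCall is a hypothetical wrapper you would script around your LLM's API, $$userInput is a hypothetical variable holding the user's input, and the concept membership lists are illustrative.

```
# Subjects we never send to the LLM (membership is illustrative).
concept: ~blocked_subject (~religion ~terrorism ~politics)

topic: ~llm_guardrails keep repeat []

# 1. Refuse risky input before it ever reaches the LLM.
u: BLOCK (~blocked_subject) Sorry, that's not something I can discuss.

# 2. Otherwise call the LLM (^MyLLMCall is a hypothetical outcall you write),
#    then re-run pattern matching over its answer with ^analyze.
u: FORWARD (*)
   $$answer = ^MyLLMCall($$userInput)
   ^analyze($$answer)
   # Discard answers that insult the user or drift into blocked subjects.
   if (^match(you * be * ~negative_affect) or ^match(~blocked_subject))
      { I'm sorry, I don't have a good answer for that. }
   else { $$answer }
```

The key idea is that ^analyze treats the LLM's answer as if it were input, so the same pattern machinery that vets the user can vet the model.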

