How A ChatGPT-like AI Can "Read A Mind" In 20 Questions
September 17, 2023

What if the next generation of Artificial Intelligence (AI) systems could anticipate, or simply "know," what is on your mind, without any connections or wires to your brain? This next generation of AI is already just about here, and it uses an algorithm born in the 1940s by scientists working on the atomic bomb. The power of AI in the future will lie in reasoning through concepts presented in a prompt by "knowing" your intents. This process requires a deeper understanding of the intents in the mind of the user. Of course, LLM AI like ChatGPT uses, in simple terms, statistical math to encode your prompts and find an answer to them. Yet, what if we can give LLM AI a form of reasoning that leads to a proto-understanding? I call this Anticipatory AI, with a log2(n) – n questions and log2(n) – n answers problem, and with or without my research over the decades, it is on the path ahead. It is better you understand it today than be surprised by it tomorrow. With this technology, the potentials are massive. It is this work, and other work I have been doing in this area, that motivates me to urge you to own your own AI. In many ways, AI will appear to be omniscient, omnipotent and omnipresent. To demonstrate how simple it may be for AI to know deeply what you are thinking, consider this fact: You have ~65,000 thoughts per day. 95% are the same as yesterday. Most are inner-critic repeated thoughts. Only ~5% are action thoughts. Thus, an AI platform with your context and data will be able to "know your mind" with a lot less work than most AI researchers have assumed. We can approach this by both "understanding" the 95% of repeating thoughts and the 5% of novel/action thoughts. The results are no less than astonishing and frightening (especially in the wrong hands).
Many years ago, when I started experimenting with this technology, most of the advanced uses were just theoretical. This year, elements of this will be deployed to "understand" you better. As I put it on Twitter: "You have ~65,000 thoughts per day. 95% are the same as yesterday. Most are inner-critic repeated thoughts. Only ~5% are action thoughts. But why? Repetitive thinking and negative thoughts once served us for hypervigilant alertness. It now is the basis of most suffering."

Unlocking the Power of Inquiry: How Advanced AI is Revolutionizing Problem-Solving Reasoning

In this fascinating member's only journey through cutting-edge AI, we dive deep into how leading AI systems will be using an intriguing, yet familiar, method to "read your mind" and solve the most complex challenges. These AI systems are not just guessing, but systematically unraveling your thoughts to provide you with precise answers. Discover how this revolutionary approach will be transforming everything from tech support to online tutoring, and why it might just be the key to a future where AI understands us better than ever before. Are you ready to have your mind read by the AI? If you are a member, thank you. If you are not yet a member, join us by clicking below.

QUESTIONS AND ANSWERS

© 2023 Brian Roemmele. Distribution is limited to members of Read Multiplex. Sharing permission is not granted.

What do you do when you don't know the answer to a question? You find the answer. The invention of writing, the printing press, the book, the floppy disk and the Internet allow any properly equipped human to answer any question, or get a fairly good "feeling" of the potential answers. It turns out humans think in a "fuzzy" way.
Our answers, as logical as they may seem, sometimes have foundations in logic, but they are fuzzy in the way they are translated to humans. Most humans choose not to speak in segments of facts connected together, and those of us who do usually suffer a life of loneliness. Humans use analogy and reference to express things. Many researchers and observers assume this is exformation (information to be discarded). However, the things we use to present concepts, ideas, even commands have a multiplex quality to how they were said. Thus, when we are asked a question, it is quite natural for us to simultaneously decode the multiplex layers of context. Many times we will ask additional questions to fully understand what was said or commanded. However, over time humans learn by doing and tend not to ask rebound questions; we avoid prolix, we speak in shortened sentences, and we take the sentences as a meta command or meta idea. For computers to solve what I call the log2(n) – n paradox, they need to deal with the fuzziness of humans, and it turns out that it is not the "insolvable" problem that even learned experts suggest. This is based on the comparison of the growth rates of the two functions involved: log2(n) and n. The function log2(n) grows much slower than n. For large n, the value of log2(n) will be much smaller than n. This is a fundamental concept in computer science, especially in the analysis of algorithms, where binary search has a time complexity of O(log n) and linear search has a time complexity of O(n). In this case, n is the unknown idea a human may have in trying to know something, or that AI grapples with in trying to "know" what the human meant.

When You Don't Know Something, You Ask Questions

In the early 1940s through the late 1950s, most Americans were glued to their radios and TVs to participate in the very popular "Twenty Questions" games. A player would think of something and participants would try to guess what they were thinking of.
The participants would usually start by asking "Animal, Vegetable, or Mineral?" The game has a heritage that spans back to our earliest human history [1]. This is no surprise, as it is a very efficient way to learn. It also fostered deductive reasoning, one of the most powerful learning tools for humans. The early versions of twenty questions would use the Linnaean taxonomy of the natural world as a way to break down the problem of understanding what the player is thinking. This is a folk taxonomy, a vernacular naming system, and can be contrasted with scientific taxonomy. Humans commonly use folk taxonomies and ontologies in real life. There are many techniques and systems to solve the log2(n) – n paradox. No doubt a series of systems working in parallel will coordinate a solution. I will cover this in more detail in the future; in this article I will focus on the twenty question model.

The Twenty Question Model

The twenty questions model is part of an abstract mathematical system called the Rényi–Ulam game [2], using Shannon's entropy statistics [3]. Shannon's entropy statistics state that to identify an arbitrary object, it takes at most 20 bits. If each question is structured to eliminate half the objects, 20 questions will allow the questioner to distinguish between 2^20, or 1,048,576, objects. Accordingly, the most effective strategy for twenty questions is to ask questions that will split the field of remaining possibilities roughly in half each time. The process is analogous to a binary search algorithm in computer science. However, in its true form, and in the way I have used this technique, it is not binary search.

The Rényi-Ulam Game

The Rényi-Ulam game, often referred to as "Twenty Questions" or "20 Questions", is a classic problem in search theory and information theory. The game's objective is to guess a number between 1 and N by asking yes/no questions, with the twist that the answerer may lie once.
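The arithmetic behind the 20-bit claim is easy to check: k perfectly balanced yes/no questions distinguish 2^k objects, so the number of questions needed for n objects is ceil(log2(n)). A quick sketch (my own illustration; `questions_needed` is a hypothetical helper name):

```python
import math

def questions_needed(n_objects: int) -> int:
    """Minimum yes/no questions to single out one of n objects when
    each question eliminates half of the remaining candidates."""
    return math.ceil(math.log2(n_objects))

print(2 ** 20)                       # 1048576 objects covered by 20 questions
print(questions_needed(2 ** 20))     # 20
print(questions_needed(1_000_000))   # 20
```

Note that one million objects also fit within 20 questions, since log2(1,000,000) is just under 20.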
The game is named after mathematicians Alfréd Rényi and Stanislaw Ulam, who studied it extensively.

"Knowing what is big and what is small is more important than being able to solve partial differential equations" -Stanislaw Ulam

The Rényi-Ulam game is a fascinating slice of mathematical history that has intrigued mathematicians, computer scientists, and game theorists for decades. Its origins can be traced back to the works of Rényi and Ulam, who conceived the game independently: Rényi developed it during his time at the Hungarian Academy of Sciences, while Ulam, a Polish mathematician, came up with a similar concept while working on the Manhattan Project in the United States. The basic premise of the game is simple. One player, the chooser, picks an object or a concept, and the other player, the guesser, has to determine what the chooser has picked by asking yes-or-no questions. The Rényi-Ulam game, however, adds a mathematical twist: the chooser may lie once during the game. The game is typically played with the chooser picking a number between 1 and n, and the guesser trying to figure out the number. The Rényi-Ulam game is a rich source of mathematical problems. The most well-known problem is determining the minimum number of questions needed to guarantee finding the chosen number, given that the chooser can lie once. This problem can be approached using methods from information theory, such as binary search algorithms. Thus, here's how it works in the simplest form of the game, where the chooser always tells the truth:

1. The guesser starts by asking whether the secret number is greater than or equal to some value x in the middle of the current range of possible values. For example, if the number is from 1 to 100, the guesser might start by asking if the number is greater than or equal to 50.

2.
Depending on the answer, the guesser can eliminate half of the remaining possible numbers. If the answer is "yes", the guesser knows the number is between x and the upper end of the range. If the answer is "no", the guesser knows it's between the lower end of the range and x.

3. The guesser repeats this process, each time halving the range of possible numbers, until they can guess the secret number with certainty. In an ideal scenario, the guesser can identify the number in log2(N) rounds, where N is the size of the number set.

The mathematical framework of the Rényi-Ulam game involves concepts from game theory, search theory, and information theory. The aim is to minimize the number of questions asked to guess the number correctly, taking into account the possibility of a single deceptive answer. In the simplest case, where the chooser cannot lie, the guesser can always determine the number in log2(n) questions by simply bisecting the range of possible numbers. However, the possibility of one lie complicates the game significantly. The question of how many queries are necessary and sufficient to find the number with one allowed lie remained open for a long time, and various bounds were found. A naive approach to this problem would be to adopt a binary search strategy, dividing the search space in half with each question. However, the single lie or misunderstanding permitted in the game renders this approach ineffective. The challenge lies in designing a questioning strategy that can identify and correct the single lie. To meet this challenge, a strategy known as the "Lie Detecting Strategy" was developed. This strategy involves repeating certain questions to identify inconsistencies in the answers which would signal a lie. After each round of questions, a consistency check is performed to ensure the answers are logically possible. If they are not, one of the answers is identified as a lie.
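The two ideas just described can be sketched in a few lines of Python (my own illustration; the function names are hypothetical). The first function plays the truthful game by halving the range; the second captures the counting argument behind the one-lie case: with at most one lie, each candidate number has q+1 possible "lie positions," so any winning q-question strategy must satisfy n*(q+1) <= 2^q. That inequality gives a lower bound on questions; matching strategies are known from the literature.

```python
def guess_number(secret: int, low: int = 1, high: int = 100):
    """Truthful Rényi–Ulam game: repeatedly ask 'is the number >= x?'
    for the midpoint x, halving the range of candidates each time."""
    questions = 0
    while low < high:
        mid = (low + high + 1) // 2   # the value x asked about
        questions += 1
        if secret >= mid:             # answer "yes": keep the upper half
            low = mid
        else:                         # answer "no": keep the lower half
            high = mid - 1
    return low, questions

def one_lie_lower_bound(n: int) -> int:
    """Smallest q with n * (q + 1) <= 2**q: the volume lower bound on
    questions when the chooser may lie at most once."""
    q = 1
    while n * (q + 1) > 2 ** q:
        q += 1
    return q

value, asked = guess_number(73)
print(value, asked)                 # 73 7, matching ceil(log2(100)) = 7
print(one_lie_lower_bound(100))     # 11: the one permitted lie costs extra questions
```

The jump from 7 to 11 questions for a range of 100 shows concretely why the single lie "complicates the game significantly."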
The mathematical analysis of the game involves determining the optimal number of questions to ask and the best way to distribute these questions to minimize the average number of questions needed to guess the number correctly. The Rényi-Ulam game is a fascinating example of a problem in the realm of PSPACE, a class of problems that require a polynomial amount of memory to solve, but may need an exponential amount of time. The Rényi-Ulam game has attracted the interest of many mathematicians and computer scientists over the years. The problem has been generalized in many ways, for example by allowing the chooser to lie more than once, or to lie probabilistically. It has also been studied in the context of various other fields, such as coding theory and computer science. In the field of computer science, the Rényi-Ulam game has been used to illustrate concepts in algorithm design, particularly in the design of search algorithms. Moreover, the game has found applications in the design of fault-tolerant systems. These are systems designed to continue functioning even when parts of them fail. The concept of the chooser being able to lie once can be seen as a metaphor for system failure, and the strategies used to win the Rényi-Ulam game can be applied to design systems that can handle such failures. The Rényi-Ulam game, despite its simple rules, has proven to be a deep and fertile ground for mathematical and computational exploration. From its origins in the mid-20th century to its modern applications in diverse fields like information theory and computer science, the game continues to be a source of intriguing questions and innovative solutions. The twenty question model took to computers in 1988, when Robin Burgener wrote the 20Q software. It was the first use of the Rényi–Ulam model with Shannon's entropy statistics in AI.
Burgener put the program on a floppy disk and sent it to all his friends to play. With every new game played, the 20Q AI learned a little bit more on top of the base knowledge Burgener programmed. In its early days it frequently guessed the player's object incorrectly, with the player then typing in the correct answer. The correct answer would grow the knowledge set and become part of 20Q's growing neural network. The system would grow rapidly: as more people play, 20Q gets better and better at understanding how each object is characterized. In 1994, Burgener wrote a version of the game that could run on the internet, where it still operates today at 20q.net [4]. Traffic to the site grew exponentially, and with it jumped 20Q's ability to guess even the trickiest objects. Today over 87,800,000 games with completed or user-supplied answers have been played, and each time the neurons are confirmed or grow with new neurons. The online version of 20Q is not only entertaining, it is usually eerily accurate, and when it is not accurate, it is not afraid to learn. Of course the user does not need to teach the system, but 99.2345% of the time the user will supply the correct answer for 20Q to learn. 20Q can deal with a wide range of questions that cover every element of human interactions: animal, vegetable, mineral, concept, unknown. After decades of working with the 20Q code, Burgener stated:

"20Q doesn't think the way a human thinks. As a human being, our strategy tends to be get a vague idea of what it is, focus in on one object and try to prove or disprove it. The 20Q AI, however, can consider every single object it knows simultaneously, so with each question you answer, certain objects become a little bit more likely to be what you're thinking of, and certain objects become a little bit less likely. It then chooses a question that will cut the number of likely objects in half."

20Q does not follow a classic binary decision tree, and thus answering a question incorrectly or lying (to a minimal degree) to the system early on will not throw it completely off. The neuron concept and AI at work will always consider every object in its knowledge base in addition to every answer you have provided; it will eventually figure out that one of the answers you gave doesn't fit with the others. When an incorrect answer by the player is detected, usually by the sixth or seventh question, it doesn't believe, for example, that it's a vegetable anymore. It'll ask you something very un-vegetable like "Does it have fur?"

How Does 20Q Work?

There is actually a great deal of science behind the neuronal AI 20Q uses. I will begin with a simple breakdown:

Mass Elimination

20Q starts by dividing the world into mass ontologies: animal, vegetable, mineral, concept, unknown. This process constrains the dataset significantly.

Context

20Q uses "cheats" about your context: it asks your stated sex, age and country of origin. This context also constrains the dataset significantly, and informs the weighting, from prior game play, of types of questions and likely answers.

Corrections

20Q makes constant corrections to the questions it asks based on how it perceives the prior answers. It will go to a new premise if there are too many conflicts.

Target Answer

20Q eliminates as many possible answers as it can and distills to one answer or a range of answers, usually no more than 5. It will draw from these results to end the game or learn.

Learning

20Q is a learning system as much as it is an answering system. Thus, as each question is presented, it learns to reinforce the correct answer it achieved or to build a new neuron for the new answer supplied by the participant.
In April 2005, Burgener applied for a patent on the neuronal AI system he uses [5]. Burgener presents the details of how 20Q works, and clearly why it is not simply a binary tree system. The system uses a number of processes to engage nodes for target elimination or confirmation:

Artificial neural network guessing method and game

Abstract: A method for guessing, in an electronic game, an object that a user is thinking of, from a set of target objects, after asking the user at least one question, the method utilizing a neural network structured in a target objects-by-questions matrix format, wherein each cell of the matrix defines an input-output connection weight, and the neural network can be utilized in a first mode, whereby answers to asked questions are input nodes and the target objects are output nodes, and in a second mode, whereby the target objects are input nodes and the questions are output nodes, the method comprising the steps of ranking the target objects by utilizing the neural network in the first mode; ranking the questions by utilizing the neural network in the second mode; and providing a guess in accordance with the ranking of the target objects.

The patent is a treasure trove of how to arrive at a successful answer, or to recover the successful answer from a user. There are other systems that can supplement the 20Q process, like soft decision trees, which can run in parallel with cross connections to inform and reinforce the system. Cristina Olaru and Louis Wehenkel described another aspect of discovery, called soft decision trees or fuzzy trees, in 2003 [6]. In their paper they speak to a concept that could reinforce the concepts in 20Q:

"In this paper, a new method of fuzzy decision trees called soft decision trees is presented. This method combines tree growing and pruning, to determine the structure of the soft decision tree, with refitting and backfitting, to improve its generalization capabilities."

In 2010, John T.
Gill III and William Wu [7] spoke to adding Huffman codes as an optimal solution to Twenty Questions. The caveat is that Twenty Questions games always end with a reply of "Yes," whereas Huffman trees need not obey this constraint. They bring resolution to this issue, and prove that the average number of questions still lies between H(X) and H(X) + 1. They show that Huffman trees determine X, but do not specify X. Thus they can be used in parallel, to a greater extent, to inform a 20Q system. A derivative of the 20Q system is the Akinator [8]. The Akinator is more focused on people and thus has a deeply constrained knowledge set. Aubin La shows in a paper the variants used by this system [9]. Akinator uses the program Limule, published by Elokence. Akinator is similar to a human being playing "Guess Who?". The characters in "Guess Who?" can be compared to the database of Akinator, which is of course a lot bigger. While playing "Guess Who?", the player will first analyze the data, that is to say, look at the specific characteristics of the characters. It is just like Akinator, which has a filled character database with many attributes for each entry. Then, the player has to decide which questions to ask to find the character to guess in the lowest number of questions. From Akinator's point of view, it's looking for the best question that will reduce the search space efficiently. This corresponds to building a decision tree and searching it. Akinator uses systems that are similar to, or are, naive tree algorithms such as C4.5 and Naive Bayes. These systems present a more classical approach to AI in current systems. Of course these systems can also be used in parallel with other systems to arrive at useful answers.

Understanding

The 20Q and Akinator systems do not "understand" their own questions; they do not understand, as demonstrated, that if an object is straight it cannot be round, and that a vegetable cannot be conscious.
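Akinator's "best question that reduces the search space efficiently" is, in C4.5-style terms, the question with the highest information gain. A minimal sketch with hypothetical "Guess Who?"-style characters and attributes of my own invention:

```python
import math

# Hypothetical character database, one attribute dict per character.
CHARACTERS = {
    "Ada":  {"wears_glasses": True,  "has_hat": False, "smiles": True},
    "Bela": {"wears_glasses": True,  "has_hat": True,  "smiles": True},
    "Chip": {"wears_glasses": False, "has_hat": False, "smiles": True},
    "Dora": {"wears_glasses": False, "has_hat": True,  "smiles": False},
}

def information_gain(candidates, attribute):
    """Expected drop in entropy from asking a yes/no attribute,
    with every remaining candidate treated as equally likely."""
    n = len(candidates)
    yes = sum(1 for c in candidates if CHARACTERS[c][attribute])
    groups = [g for g in (yes, n - yes) if g > 0]
    return math.log2(n) - sum(g / n * math.log2(g) for g in groups)

candidates = list(CHARACTERS)
attributes = ["wears_glasses", "has_hat", "smiles"]
best = max(attributes, key=lambda a: information_gain(candidates, a))
print(best)   # an even 2/2 split beats the lopsided 3/1 "smiles" split
```

A question that splits the remaining candidates evenly gains a full bit, while a lopsided question gains less, which is exactly why balanced questions shrink the search space fastest.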
It simply tries to connect answers algorithmically, and that is not intelligence per se. However, when used with other ML systems, the AI can learn the distinctions. 20Q is a terminal into these systems, interacting with the human and the machine.

If It Is Not Dumb, Is It Intelligent?

Many in the AI research world, and some very educated observers, look at the problem of knowing every question and answering them accurately as an almost insurmountable general AI problem. Many use the Turing Test [10], also known as the Imitation Game, as a measure of how Voice First AI is performing. The Turing Test was first proposed in 1950 by Alan Turing in the paper "Computing Machinery and Intelligence". Turing is commonly acknowledged as the father of artificial intelligence and computer science, and developed the Imitation Game as a substitute for the question "Can machines think?". The Turing test is interesting, but it is not the baseline for how future Voice First systems will deal with the seemingly impossible task of log2(n) – n questions and log2(n) – n answers. The Turing test starts out with an invalid premise for modern AI systems: to fool the user into thinking they are truly talking to a human. AI systems do not need to achieve this ultimate state. However, human-like learning by seeking more information from the questions humans present is very useful. Above, I presented a simple way, but not the only way, this could take place today and solve, contextually to the user, the log2(n) – n questions and log2(n) – n answers problem. Many have already come to believe ChatGPT-4 is close to being able to pass the Turing Test. This is true to some practical level. Moreover, Turing's intention was never to use his test as a way to measure the intelligence of AI programs, but rather to provide a coherent example to aid arguments regarding the philosophy of artificial intelligence.
On the other hand, the 20Q programs work very algorithmically, where answers can be obtained after log2(n) questions for n words. Thus, with 20 questions one can discriminate among 2^20 (about one million) words by simply using this algorithm, and with more than 800 million games played at the time of writing, the program has been able to optimize this algorithm multiple times. Burgener's program was able to develop "synaptic connections". Recently the development of neural networks has been tending towards biophysical models like the Bienenstock-Cooper-Munro (BCM) theory. It has built upon the 20Q approach and creates a model with reasoning capabilities. Furthermore, there is also research trying to understand the computational algorithms used in the human brain. Thus, the new era of neural computing will be focused more on learning rather than programming. Intelligence is defined as the "capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc". John Searle argues that the programs that attempt to pass the Turing test are "simulating a human ability", and although this may indicate some level of operational intelligence, it does not show that the program has the capacities of reasoning, understanding, or grasping truths or meanings, among other definitions of intelligence. Furthermore, Huma Shah and Kevin Warwick suggest that interrogators are fooled by the program into perceiving intelligence rather than facing real intelligence. The same principles apply to the 20 Questions game, which does not exhibit most of the definitions of intelligence besides the fact that it can "learn" from previous answers.

Self-Learning

Ayan Acharya, Raymond J. Mooney and Joydeep Ghosh presented a great paper [11] on active learning. This paper did not use the 20Q modality as an input; however, it demonstrates a bridge to how active learning can work in Voice First systems.
The paper introduced two new models for active multitask learning. Experimental results comparing them to six different ablations of these models demonstrate the utility of integrating active and multitask learning in one framework that also unifies latent and supervised shared topics. One could additionally actively query for rationales and further improve the predictive performance. The computational complexity of the proposed models largely depends on the active selection mechanism adopted. For large scale applications, one needs better approximation techniques for active selection. Although twenty questions may seem tedious, it turns out that in 76% of cases in my AI research, far fewer questions are required to arrive at a target. This process may all seem very familiar to us: it is how we teach a child, it is how we learned. We learn by asking questions and forming a contextual paradigm of the world around us. Current LLM systems not only do not maintain evergreen, long-tail contextual data on the user, they do not employ a 20Q-type system to learn. There are other feed mechanisms, like user data streams, GPS, address books, and other databases, that can, with permission, aid in context. This is actually a rather good artifact, as the current systems are woefully inadequate to secure this level 1 data. The game's mathematical framework can be used to design better question-answering strategies and anticipate user questions more effectively.

ANTICIPATION OF QUESTIONS

The Rényi-Ulam game focuses on anticipating the answerer's behavior to optimize the questioning strategy. This can be transferred to LLMs to improve their ability to anticipate user questions. The model can be trained to predict the next user input based on the current conversation context, thereby enabling it to provide more effective responses.

BUILDING BETTER ANSWERS

The game's strategy of asking questions and performing consistency checks can be used to improve the quality of LLM responses.
The model can be designed to ask clarifying questions when the user input is ambiguous or unclear, and then use the user's responses to these questions to provide a more accurate answer.

HANDLING OF FALSE INFORMATION

Just as the Rényi-Ulam game involves dealing with the possibility of a lie, LLMs often have to contend with false or misleading information. The game's strategies can be used to design LLMs that are better equipped to identify and handle such information.

EXAMPLES OF APPLICATION

Now, let's illustrate these concepts with some examples. For the sake of brevity, we'll limit ourselves to five examples.

1. Chatbot Conversation Contextualization: An AI chatbot can use the principles of the Rényi-Ulam game to better understand the context of a conversation. For instance, if a user asks, "What's the weather like?", the AI can ask a follow-up question, "Where are you currently located?", to provide an accurate response.

2. Ambiguity Resolution: If a user asks a question that could have multiple interpretations, the AI can ask clarifying questions. For instance, for the question, "How tall is he?", the AI can ask, "Who are you referring to?".