
RCM Managed Asset Portfolio - AI
By Christopher Chiu, CFA
August 2025
Since AI is on the cusp of becoming a bigger part of our world, it may be useful to know how it works in simple terms.
The most accessible form of AI is the large language model (LLM) we use through Grok, Gemini, ChatGPT, etc. We ask the LLM a complex question and it responds with a coherent, even complex answer. How is this possible? The model must provide the right words in the right places. So the problem at the start is finding the right word or group of words to begin the response and then following with the correct sequence of words. And it does this with responses ranging from a paragraph to a full page of text. This is not a small task. Imagine answers that get their subject wrong from the start or, if they get past this first hurdle, are not grammatically correct. These kinds of answers don't resemble intelligent responses and will be a turn-off to the human questioner. So to pass the test of being able to communicate with humans, both the opening words and the sequence of words that follow must be correct.
AI Training
The first challenge for the AI is getting the subject correct when a question is asked. The answer needs to correspond to the question. As with search, the LLM will have indexed a good deal of what is available on the Internet and found the most common patterns of language. With search, when you type in "Greek vacation," the search engine will provide you with websites of hotels, tours, and travel sites. These search results are websites that have been categorized as relevant to Greek trips. The way the search engines have done that is by crawling many, many websites and then indexing them, that is, counting the instances when a word was used and how it was used. The search results will be a product of the frequency with which the searched word appears and, most importantly, its relevance to the query.
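The crawl-and-index idea above can be sketched as a toy word counter. The page names and texts below are made up for illustration, and real search engines use far more sophisticated relevance signals; this only shows the basic count-and-rank step.

```python
from collections import Counter

# Toy "crawl": a handful of page texts (hypothetical examples)
pages = {
    "hotels.example": "greek islands hotels near the beach in greece",
    "tours.example": "guided tours of greek ruins and greek islands",
    "news.example": "markets rose today as earnings beat estimates",
}

# "Index" each page: count how often each word appears on it
index = {url: Counter(text.split()) for url, text in pages.items()}

# Rank pages for the query "greek vacation" by how often the query
# words appear in each page's index
query = "greek vacation".split()
scores = {url: sum(counts[w] for w in query) for url, counts in index.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # → tours.example (it mentions "greek" twice)
```

The travel pages outrank the news page simply because the query words appear on them at all; frequency then breaks the tie between the two travel pages.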
Likewise, the AI looks for words or groups of words that are associated with the words in your query. For example, given the prompt "The sky is," the model will look for the kinds of situations that involve discussion of the sky, such as appearances, climate, even poetry. The word "sky" is most often used in the context of the weather. Therefore, the model might assign high probabilities to words like "blue" (0.7), "clear" (0.15), or "cloudy" (0.1), effectively ranking them.
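Those probabilities can be pictured with a small sketch. The numbers are the hypothetical ones from the example above, not output from any real model:

```python
import random

# Hypothetical probabilities a model might assign to the next word
# after the prompt "The sky is" (numbers are illustrative only)
next_word_probs = {"blue": 0.7, "clear": 0.15, "cloudy": 0.1, "falling": 0.05}

# Ranking: sort the candidates from most to least likely
ranked = sorted(next_word_probs, key=next_word_probs.get, reverse=True)
print(ranked)  # → ['blue', 'clear', 'cloudy', 'falling']

# Sampling: pick a word in proportion to its probability, so the
# answer is usually "blue" but occasionally one of the alternatives
word = random.choices(list(next_word_probs), weights=next_word_probs.values())[0]
```

Real models do something like the sampling step at every word of the answer, which is why the same question can produce slightly different wordings each time.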
How the AI Ranks Answers
How does the AI rank the possible answers and give the correct one? Here, it might be useful to compare the ranking process of AI to the way code breakers decipher an encrypted message. If you have ever read the Sherlock Holmes story "The Adventure of the Dancing Men" or seen the movie Zodiac, you will remember the protagonist in each of these stories is presented with a coded message by the criminal. The way these messages are decrypted is always by looking for the most common symbol that appears in the message. Since the letter "E" is the most commonly used letter in the English language, the most common symbol in the encrypted message usually corresponds to the letter "E". Then it follows that a consonant will commonly follow the vowel "e"; for example, consonants like "n," "s," "t," "d," or "r" appear in words like "pen," "yes," "set," "bed," and "her." Also, some of the same consonants will precede the vowel "e". This is important because, out of the many possibilities, the consonant that is ultimately chosen to decipher a word is the one that provides the best fit to the spelling and meaning of the overall message. It's not so different from trying different letters in an elaborate crossword puzzle, except that once you know the letter a symbol likely stands for, it helps with decoding the rest of the message. The decrypter will try different combinations, with the right combination providing the greatest coherence to the message overall. By process of elimination, settling on the letters that create the best fit, the message can be decoded. It is labor-intensive work, but it is not impossible.
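The code breaker's first step, counting symbol frequencies and guessing that the most common one stands for "E", can be sketched in a few lines. The ciphertext here is an invented placeholder, not a real cipher from either story:

```python
from collections import Counter

# A message enciphered with a simple substitution (invented example):
# each letter has been replaced by another fixed letter, spaces kept
ciphertext = "XQQX XSQX QXSQ XXQS"

# Step 1 of the code breaker's method: count how often each symbol appears
counts = Counter(c for c in ciphertext if c.isalpha())
most_common_symbol = counts.most_common(1)[0][0]

# Since "E" is the most common letter in English, a first guess maps
# the most frequent cipher symbol to "E"; later guesses survive only
# if they keep producing plausible spellings across the whole message
guess = {most_common_symbol: "E"}
print(most_common_symbol, counts[most_common_symbol])  # → X 7
```

Everything after this first guess is the trial-and-error described above: propose a mapping, check it against the rest of the message, and discard it if the words stop making sense.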
This is an oversimplification, but large language models work in a comparable way, only instead of letters they work with words or groups of words called tokens. And where we intuitively use an understanding of spelling and meaning to find a fit, they use probabilities drawn from the data they have crawled, indexed, and ranked. When the words written by the LLM are in the correct sequence, it is because the model already has a probabilistic map of how words or groups of words should follow and precede one another, based on how they have appeared in context on the Internet. The creation of this probabilistic map of ranked words, fitted to respond to a particular query, is called AI training, especially for responses to the most commonly asked questions. And because that probability is based on what currently exists, the LLM will easily provide a response to a query by replicating an answer that already exists, but it will find it harder to answer an uncommon question that no one has ever written about and published on the Internet.
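A probabilistic map like the one described above can be sketched as a tiny table of which token tends to follow which. The table and its numbers are hand-made for illustration; real models learn vastly larger maps that condition on the whole preceding context, not just the last token:

```python
# A tiny, hand-made "probabilistic map": for each token, which tokens
# tend to follow it and how often (all numbers are invented)
bigram_probs = {
    "the": {"sky": 0.5, "sea": 0.3, "end": 0.2},
    "sky": {"is": 0.9, "was": 0.1},
    "is":  {"blue": 0.7, "clear": 0.2, "cloudy": 0.1},
}

def generate(start, steps):
    """Greedily pick the most probable next token at each step."""
    tokens = [start]
    for _ in range(steps):
        followers = bigram_probs.get(tokens[-1])
        if not followers:  # no known continuation: stop early
            break
        tokens.append(max(followers, key=followers.get))
    return " ".join(tokens)

print(generate("the", 3))  # → the sky is blue
```

Following the highest-probability path token by token is what makes the output read as a correct sequence, and it also shows the limitation noted above: the map can only reproduce continuations it has seen before.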
Inference
To answer uncommon questions, these LLMs may have to marry a set of narrowed possibilities with yet other data and find the words that provide the best response given the overall context. So if the question about the sky is an uncommon one, such as "How will atmospheric gravity waves affect tomorrow's weather?", the AI might have to marry its training with data related to atmospheric dynamics to find the most appropriate answer. This is an example of inference, where the trained model is used to make predictions or decisions based on data that has not been published previously.
This has been just a brief description of how a large language model arrives at an answer. You can imagine, given the variety of queries, the number of different combinations that must be tried with different buckets of information, by process of elimination, before answers emerge that pass the threshold of being considered an intelligent response. It's an enormous amount of work. But the prize is an agent intelligent enough to respond to almost every question that is often asked, and even some that are only just being imagined. And this explains why there has been such a large amount of spending by the AI companies pursuing this very goal.