October 6, 2025

AWS scientist: Your AI strategy needs mathematical logic




Hallucination is fundamental to the operation of transformer-based language models. In fact, it is their greatest asset: it is how language models find connections between sometimes disparate concepts. But hallucination becomes a curse when language models are applied in domains where truth matters. Examples range from questions about health care policy to code that correctly uses third-party APIs. With agentic AI, the stakes are even higher, because autonomous agents can take irreversible actions, such as sending money, on our behalf.

The good news is that we have methods to ensure that AI systems follow the rules, and the engines underlying these tools are also improving dramatically every year. This branch of AI is called automated reasoning (a.k.a. symbolic AI): it searches algorithmically for proofs in mathematical logic in order to reason about the truth and falsity that follow from axiomatically defined policies.

It is important to understand that we are not talking about probabilities or best guesses. These are rigorous proofs, found through algorithmic search, in mathematical logic. Symbolic AI builds on foundations first laid down by predecessors such as Aristotle, Boole, and Frege, and developed in modern times by great minds like Claude Shannon and Alan Turing.

Automated reasoning is not just theory: it enjoys deep industry adoption

It started in the 1990s with proofs about low-level circuits in response to the Pentium FDIV bug. Later it appeared in safety-critical systems used by Airbus and NASA. Today, it is increasingly deployed in neurosymbolic use cases. Leibniz AI, for example, applies formal reasoning to AI for the legal domain, while Atalanta applies the same ideas to government contracting problems, and DeepMind's AlphaProof system does not generate false mathematical arguments because it uses the Lean theorem prover.

The list goes on: Imandra's CodeLogician will not allow programs to be synthesized that would violate an API's rules of use, because it too builds on automated reasoning tools. Amazon's Automated Reasoning checks feature in Amazon Bedrock Guardrails defends against false statements using automated reasoning together with axiomatic formalizations that customers can define themselves. For organizations looking to scale their work with AI while trusting its outputs, the logical deduction capabilities of automated reasoning tools can be used to guarantee that interactions stay within defined constraints and rules.

A key characteristic of automated reasoning is that it admits "I don't know" when it cannot prove a valid answer, rather than making up information. In many cases, the tools can also point to the conflicting logic when a statement can neither be proved nor refuted with certainty, and show the reasoning behind their determinations.
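As a toy illustration of this three-way behavior (a hypothetical sketch, not the API of any product mentioned here), the checker below classifies a propositional formula by exhaustive truth-table search: "proved" if it holds under every assignment, "refuted" if it holds under none, and "unknown", with the evidence behind the determination, when neither a proof nor a refutation exists.

```python
from itertools import product

def classify(formula, variables):
    """Classify a propositional formula by exhaustive truth-table search.

    Returns ("proved", None) if the formula is true under every assignment,
    ("refuted", None) if it is false under every assignment, and
    ("unknown", evidence) otherwise, where evidence records one assignment
    that makes it true and one that makes it false.
    """
    true_case = false_case = None
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if formula(env):
            if true_case is None:
                true_case = env
        else:
            if false_case is None:
                false_case = env
    if false_case is None:
        return ("proved", None)
    if true_case is None:
        return ("refuted", None)
    return ("unknown", {"holds_when": true_case, "fails_when": false_case})

# A tautology: p or not p
print(classify(lambda e: e["p"] or not e["p"], ["p"]))  # ('proved', None)

# A contingent claim: p and q -- neither provable nor refutable,
# and the tool shows its reasoning via the two witness assignments.
verdict, evidence = classify(lambda e: e["p"] and e["q"], ["p", "q"])
print(verdict, evidence)
```

Real automated reasoning tools use far more efficient search than a truth table, but the shape of the answer, including the honest "unknown", is the same.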

Automated reasoning tools are also generally inexpensive to operate, particularly compared to compute-hungry transformer-based tools. The reason is that automated reasoning tools work purely symbolically on what is true and false. They do not crunch numbers, and there are no matrix multiplications on GPUs. To see why, think of the "solve for x" problems from your school math lessons. When we rewrite x + y to y + x, or x(y + z) to xy + xz, we reason about infinitely many values while performing only a few simple steps. These steps take milliseconds on a computer.
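The rewriting idea above can be made concrete with a small sketch (an illustrative toy, not a production prover): normalize both sides of an equation into a canonical polynomial form, and if the normal forms match, the two expressions are equal for every possible value of the variables, with no enumeration and no GPU in sight.

```python
def poly(expr):
    """Normalize an arithmetic expression into a canonical polynomial:
    a dict mapping a sorted tuple of variable names (a monomial) to its
    integer coefficient. Equal dicts mean the expressions agree for ALL
    values of the variables -- a symbolic proof in a handful of steps."""
    if isinstance(expr, int):                      # a constant
        return {(): expr} if expr else {}
    if isinstance(expr, str):                      # a variable like "x"
        return {(expr,): 1}
    op, a, b = expr                                # ("+", a, b) or ("*", a, b)
    pa, pb = poly(a), poly(b)
    out = {}
    if op == "+":                                  # merge like monomials
        for p in (pa, pb):
            for mono, c in p.items():
                out[mono] = out.get(mono, 0) + c
    elif op == "*":                                # distribute the product
        for m1, c1 in pa.items():
            for m2, c2 in pb.items():
                mono = tuple(sorted(m1 + m2))
                out[mono] = out.get(mono, 0) + c1 * c2
    return {m: c for m, c in out.items() if c != 0}

# Commutativity: x + y == y + x, proved for every x and y at once.
assert poly(("+", "x", "y")) == poly(("+", "y", "x"))

# Distributivity: x * (y + z) == x*y + x*z.
assert poly(("*", "x", ("+", "y", "z"))) == \
       poly(("+", ("*", "x", "y"), ("*", "x", "z")))
```

Each assertion settles a claim about infinitely many numbers via a few dictionary operations, which is exactly why this style of reasoning is so cheap compared to large-scale numeric computation.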

It is true that applying mathematical logic is not a universal solution to every AI problem. For example, we would be doubtful of an axiomatization of what makes a song or a poem "good." We would also question tools that claim to prove in mathematical logic that our home furnace will never break. But in applications where we can axiomatically define all true and false statements in a given domain (for example, eligibility under family medical leave law, or the correct use of a software library), the approach offers a practical way to deploy AI safely in business-critical domains where accuracy is paramount.
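To make the "axiomatically definable domain" idea concrete, here is a sketch using hypothetical, deliberately simplified leave-eligibility rules (invented for illustration; not the actual Family and Medical Leave Act). Because the rules and the domain are fully defined, a claim about a policy can be checked exhaustively, yielding a proof or a concrete counterexample rather than a guess.

```python
from itertools import product

# Hypothetical, simplified eligibility axioms -- for illustration only,
# NOT the real statute.
def eligible(months_employed, hours_worked, site_has_50_employees):
    return (months_employed >= 12
            and hours_worked >= 1250
            and site_has_50_employees)

def prove_for_all(prop, domains):
    """Check a property at every point of a finite domain: an exhaustive
    proof, not a statistical estimate. Returns (True, None) on success,
    or (False, witness) with a concrete counterexample on failure."""
    for point in product(*domains):
        if not prop(*point):
            return (False, point)
    return (True, None)

months = range(0, 25)            # 0..24 months of employment
hours = range(0, 2001, 250)      # hours worked in the past year
sites = [False, True]            # worksite has >= 50 employees?

# Claim: nobody employed for under 12 months is ever eligible.
ok, _ = prove_for_all(
    lambda m, h, s: not (m < 12 and eligible(m, h, s)),
    [months, hours, sites],
)
print(ok)  # True

# A false claim ("nobody is ever eligible") is refuted with a witness.
ok2, witness = prove_for_all(
    lambda m, h, s: not eligible(m, h, s),
    [months, hours, sites],
)
print(ok2, witness)
```

Industrial tools replace this brute-force loop with logical deduction that handles unbounded domains, but the contract is the same: within an axiomatized domain, every answer comes with a proof or a counterexample.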

Getting started

Although automated reasoning tools have historically required deep mathematical expertise, the growing power of generative AI is making them accessible to a much wider audience: users can express rules in natural language and automatically check AI outputs against those rules. In fact, many language models are trained on the outputs of automated reasoning tools (often in combination with reinforcement learning). The key is to start with clear use cases that can be defined precisely; think of things such as coding, HR policies, and tax law. The approach also applies in areas where verification really matters, such as security, compliance, and cloud infrastructure.

Looking ahead

As we integrate AI ever more deeply into our lives, the ability to verify the accuracy and veracity of its actions and outputs will only become more critical. Organizations that invest in automated reasoning capabilities now will be better positioned to safely scale the adoption of AI and agents while maintaining control and compliance. At your next AI strategy meeting, consider automated reasoning. It could be the key to deploying AI with confidence in your organization and for your customers.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.




