Transparent AI for Mathematics: Transformer-Based Large Language Models for Mathematical Entity Relationship Extraction with XAI

This paper proposes a transparent AI framework for Mathematical Entity Relation Extraction (MERE) that utilizes BERT to achieve 99.39% accuracy in identifying mathematical relationships and incorporates SHAP-based explainability to enhance model trust and interpretability for applications in automated problem solving and education.

Tanjim Taharat Aurpa

Published 2026-03-09

Imagine you have a robot that is incredibly smart at reading books, but when you hand it a math word problem, it gets confused. It sees words like "mangoes," "children," and "divided," but it doesn't quite understand that these words are actually hiding a secret math equation waiting to be solved.

This paper is about teaching that robot to not only read the words but to understand the math story behind them, and then to show us exactly how it figured it out.

Here is the breakdown of their work, explained simply:

1. The Big Idea: Math as a Relationship Game

Usually, when we read a sentence like "A man bought ten mangoes and divided them equally among five children," we see a story.
The researchers decided to treat this sentence like a relationship game.

  • The Characters (Entities): The "ten mangoes" and the "five children" are the players.
  • The Action (Relationship): The word "divided" is the action connecting them.

Their goal was to build a computer program that can automatically spot the characters and figure out what action is happening between them, turning a messy paragraph of text into a clean math equation.
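The "relationship game" can be sketched in a few lines of Python. This is a toy illustration, not the paper's actual system: the number words and action words below are a tiny hand-picked table, and real word problems need far more than keyword lookup.

```python
import re

# Toy sketch: spot the number entities and the action word in a math
# story, then assemble them into an equation.  The two lookup tables
# are illustrative only, not the paper's method.
NUMBER_WORDS = {"five": 5, "ten": 10}
ACTION_WORDS = {"divided": "/", "plus": "+", "minus": "-", "times": "*"}

def extract(sentence):
    """Return (number entities, action operators) found in the sentence."""
    words = re.findall(r"[a-z]+", sentence.lower())
    entities = [NUMBER_WORDS[w] for w in words if w in NUMBER_WORDS]
    actions = [ACTION_WORDS[w] for w in words if w in ACTION_WORDS]
    return entities, actions

sentence = "A man bought ten mangoes and divided them equally among five children."
(a, b), (op,) = extract(sentence)
print(f"{a} {op} {b} = {eval(f'{a} {op} {b}')}")   # 10 / 5 = 2.0
```

Even this crude version turns a "messy paragraph" into a clean equation; the paper's contribution is making that step robust with a learned model instead of a lookup table.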

2. The Star Player: BERT (The Super-Reader)

To do this, they used a famous AI model called BERT. Think of BERT as a super-advanced librarian who has read almost every book in the English language.

  • Because BERT has read so much, it understands context. It knows that "bank" means something different beside a river than it does in a sentence about money.
  • The researchers took this super-reader and gave it a specific training course on math problems. They taught it to look for numbers and action words (like "plus," "minus," "root," "factorial") and predict the relationship.

The Result: This trained BERT model was a superstar. It got 99.39% of the answers right. That's like getting 99 out of 100 questions correct on a math test!
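The paper's classifier is a fine-tuned BERT model, which maps a sentence to a relation label. As a rough sketch of that same sentence-in, label-out interface, here is a toy cue-word classifier; the labels and cue words are illustrative stand-ins, and none of BERT's contextual reasoning happens here.

```python
# A hand-made stand-in for the fine-tuned BERT classifier: it mimics
# the input → relation-label interface, but decides by counting cue
# words instead of deep contextual understanding.
CUES = {
    "Addition":    {"plus", "sum", "total", "altogether"},
    "Subtraction": {"minus", "left", "fewer"},
    "Division":    {"divided", "each", "share"},
    "SquareRoot":  {"root", "square"},
}

def predict_relation(sentence):
    """Pick the relation label whose cue words overlap the sentence most."""
    words = set(sentence.lower().split())
    return max(CUES, key=lambda label: len(CUES[label] & words))

print(predict_relation("Find the square root of 49"))   # SquareRoot
```

The real model replaces the cue-word counting with learned contextual embeddings, which is why it reaches 99.39% rather than failing on every sentence that phrases an operation in an unexpected way.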

3. The Problem: The "Black Box" Mystery

Here is the tricky part. Even though the robot got the answer right, we didn't know why.
Imagine a student gets a math problem right. If they just say "I got it!" but refuse to show their work, you might not trust them. You'd wonder, "Did they guess? Did they cheat?"
In AI, this is called a "Black Box." We see the input (the question) and the output (the answer), but the thinking process inside is hidden.

4. The Solution: XAI and SHAP (The Flashlight)

To fix the "Black Box" problem, the researchers used a tool called SHAP (short for SHapley Additive exPlanations).

  • The Analogy: Imagine SHAP is a magnifying glass or a flashlight that shines on the robot's brain.
  • When the robot makes a prediction, SHAP highlights the specific words that made the robot say "Yes, this is Division!" or "No, this is Addition!"
  • Red highlights mean a word pushed the robot toward its answer.
  • Blue highlights mean a word pushed it away, making the robot less sure.

What they found:

  • For the "Square Root" relationship, the robot relied heavily on the words "root" and "square." It was very confident.
  • For "Addition," the robot looked at a mix of words like "total," "sum," and "plus." It had to gather clues from many places to be sure.

This proves the robot isn't just guessing; it's actually reading the math story and understanding the clues, just like a human would.
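The "flashlight" idea can be made concrete with a tiny worked example. The sketch below computes exact Shapley values for a hand-made additive scoring model (a stand-in for the real BERT classifier, with made-up cue scores): each word's value is how much, averaged over all the ways of adding words one by one, that word changes the model's "this is Division" score.

```python
from itertools import combinations
from math import factorial

# Toy "model": scores how strongly a set of words looks like a
# Division problem.  The cue scores are invented for illustration;
# the paper applies SHAP to a real BERT classifier instead.
CUE_SCORES = {"divided": 0.6, "equally": 0.3, "among": 0.2, "mangoes": 0.0}

def score(words):
    """Model output for a subset of words (higher = 'more Division')."""
    return sum(CUE_SCORES.get(w, 0.0) for w in words)

def shapley_values(words):
    """Exact Shapley value of each word for the score() model."""
    n = len(words)
    values = {}
    for w in words:
        others = [x for x in words if x != w]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (score(subset + (w,)) - score(subset))
        values[w] = total
    return values

phi = shapley_values(["divided", "equally", "among", "mangoes"])
for w, v in phi.items():
    print(w, round(v, 3))   # divided 0.6, equally 0.3, among 0.2, mangoes 0.0
```

Because this toy model is purely additive, each word's Shapley value equals its own cue score, and the values sum to the full-sentence score — exactly the "Additive" promise in SHAP's name. SHAP's practical contribution is estimating these values efficiently for models like BERT, where enumerating every word subset is infeasible.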

5. Why This Matters

Why should we care about a robot that solves math word problems?

  • Smart Tutors: Imagine an app that doesn't just give you the answer, but explains why it's the answer, helping students learn better.
  • Research Assistants: Scientists could use this to scan thousands of research papers and automatically pull out all the math formulas and relationships, saving hours of work.
  • Trust: Because they used the "flashlight" (SHAP), we can trust the robot. We know it's not making up answers; it's following the logic of the text.

Summary

In short, the researchers built a math-savvy robot (using BERT) that can read word problems and find the hidden math equations. They then built a flashlight (using SHAP) to show us exactly which words the robot used to solve the puzzle. This makes the robot not only smart but also trustworthy and transparent, paving the way for better educational tools and automated math systems in the future.