Humans make nearly 35,000 decisions every day, from whether it’s safe to cross the road to what to have for lunch. Every decision involves weighing the options, remembering similar past scenarios, and feeling reasonably confident about the right choice. What may seem like a snap decision actually comes from gathering evidence from the surrounding environment. And often the same person makes different decisions in the same scenarios at different times.
Neural networks do the opposite, making the same decisions each time. Now, Georgia Tech researchers in Associate Professor Dobromir Rahnev’s lab are training them to make decisions more like humans. This science of human decision-making is only just being applied to machine learning, but developing a neural network even closer to the actual human brain may make it more reliable, according to the researchers.
In a paper in Nature Human Behaviour, “The Neural Network RTNet Exhibits the Signatures of Human Perceptual Decision-Making,” a team from the School of Psychology reveals a new neural network trained to make decisions similar to humans.
Decoding Decisions
“Neural networks make a decision without telling you whether or not they are confident about their decision,” said Farshad Rafiei, who earned his Ph.D. in psychology at Georgia Tech. “This is one of the essential differences from how people make decisions.”
Large language models (LLMs), for example, are prone to hallucinations. When an LLM is asked a question it doesn’t know the answer to, it will make up something without acknowledging the artifice. By contrast, most humans in the same situation will admit they don’t know the answer. Building a more human-like neural network can prevent this duplicity and lead to more accurate answers.
Making the Model
The team trained their neural network on handwritten digits from a well-known computer science dataset called MNIST and asked it to decipher each number. To gauge the model’s accuracy, they ran it on the original dataset and then added noise to the digits, making them harder to discern even for humans. To compare performance against humans, they trained their model (along with three rival models: CNet, BLNet, and MSDNet) on the noise-free MNIST dataset, tested all of them on the noisy version used in the human experiments, and compared the results from the two datasets.
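The noise-addition step can be sketched as follows. This is an illustrative reconstruction, not the authors’ exact procedure: the Gaussian noise level `sigma` and the stand-in image are assumptions, and real MNIST images would be loaded from the dataset rather than drawn by hand.

```python
import numpy as np

def add_noise(image, sigma=0.5, rng=None):
    """Add Gaussian pixel noise to an image with values in [0, 1].

    sigma is an illustrative noise level, not a value from the paper.
    """
    rng = rng or np.random.default_rng(0)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixels in the valid range

# A stand-in 28x28 "digit" (MNIST images are 28x28 grayscale).
clean = np.zeros((28, 28))
clean[4:24, 12:16] = 1.0  # a crude vertical stroke, like a "1"

noisy = add_noise(clean, sigma=0.5)
```

Training on clean images but testing on noisy ones is what makes the comparison fair: both the models and the human participants face degraded inputs they were never trained on.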
The researchers’ model relied on two key components: a Bayesian neural network (BNN), which uses probability to make decisions, and an evidence accumulation process that keeps track of the evidence for each choice. The BNN produces responses that are slightly different each time. As it gathers more evidence, the accumulation process can sometimes favor one choice and sometimes another. Once there is enough evidence to decide, RTNet stops the accumulation process and makes a decision.
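The two-component idea can be sketched in a few lines. This is a minimal illustration, not the paper’s implementation: the stochastic “network” here is a stand-in that adds noise to fixed class scores (a real BNN would sample its weights), and the evidence threshold, noise level, and logits are all assumed values.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def stochastic_readout(logits, rng, noise=1.0):
    """Stand-in for a Bayesian neural network: each call perturbs the
    class scores, so the output probabilities differ every time."""
    return softmax(logits + rng.normal(0.0, noise, size=logits.shape))

def accumulate_until_threshold(logits, threshold=5.0, max_steps=1000, seed=0):
    """Evidence accumulation: sum stochastic probability samples per class
    and stop as soon as one class's total evidence crosses the threshold."""
    rng = np.random.default_rng(seed)
    evidence = np.zeros_like(logits, dtype=float)
    for step in range(1, max_steps + 1):
        evidence += stochastic_readout(logits, rng)
        if evidence.max() >= threshold:
            break
    choice = int(np.argmax(evidence))
    confidence = evidence[choice] / evidence.sum()  # simple confidence proxy
    return choice, step, confidence

# Hypothetical class scores for digits 0-9, with "3" the strongest.
logits = np.array([0., 0., 0., 2.0, 0., 0., 0., 0., 0., 0.])
choice, n_samples, conf = accumulate_until_threshold(logits)
```

Because each readout is random, early samples can favor the wrong digit; the running total only settles once enough evidence has piled up, which is what makes both the decision and its timing variable from trial to trial.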
The researchers also timed the model’s decision-making speed to see whether it follows a psychological phenomenon called the “speed-accuracy trade-off” that dictates that humans are less accurate when they must make decisions quickly.
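The speed-accuracy trade-off can be demonstrated with a toy two-choice race: lowering the evidence threshold produces faster but less accurate decisions. All numbers here are illustrative assumptions, not values from the study.

```python
import numpy as np

def simulate(threshold, n_trials=2000, p_correct_sample=0.55, seed=0):
    """Each sample is a noisy vote for one of two choices; the correct
    choice wins each vote with probability p_correct_sample. A decision
    is made when one choice leads by `threshold` votes (a random walk
    with two absorbing boundaries)."""
    rng = np.random.default_rng(seed)
    n_correct = 0
    total_time = 0
    for _ in range(n_trials):
        tally = 0
        t = 0
        while abs(tally) < threshold:
            tally += 1 if rng.random() < p_correct_sample else -1
            t += 1
        n_correct += tally > 0
        total_time += t
    return n_correct / n_trials, total_time / n_trials

fast_acc, fast_rt = simulate(threshold=2)   # hasty: low evidence bar
slow_acc, slow_rt = simulate(threshold=10)  # careful: high evidence bar
```

With a low threshold, a short unlucky streak of noisy votes is enough to trigger the wrong answer; raising the threshold averages out the noise at the cost of longer response times, which is exactly the trade-off observed in human subjects.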
Once they had the model’s results, they compared them to humans’ results. Sixty Georgia Tech students viewed the same dataset and shared their confidence in their decisions, and the researchers found the accuracy rate, response time, and confidence patterns were similar between the humans and the neural network.
“Generally speaking, we don’t have enough human data in existing computer science literature, so we don’t know how people will behave when they are exposed to these images. This limitation hinders the development of models that accurately replicate human decision-making,” Rafiei said. “This work provides one of the biggest datasets of humans responding to MNIST.”
Not only did the team’s model outperform all rival deterministic models, it was also more accurate in high-speed scenarios. And it captured another fundamental element of human psychology: people feel more confident when they make correct decisions. Without being trained specifically to favor this confidence pattern, the model exhibited it automatically, Rafiei noted.
“If we try to make our models closer to the human brain, it will show in the behavior itself without fine-tuning,” he said.
The research team hopes to train the neural network on more varied datasets to test its potential. They also expect to apply this BNN model to other neural networks to enable them to rationalize more like humans. Eventually, algorithms won’t just be able to emulate our decision-making abilities, but could even help offload some of the cognitive burden of those 35,000 decisions we make daily.