AI-Generated Text and Its Limitations
Artificial intelligence has reached the point where it can produce text that sounds so human that most readers cannot tell the difference. Such AI programs are already being used to create and spread fake political news and AI-written blog posts that pass as authentic.
According to research by Chu-Cheng Lin, a Ph.D. candidate in the Whiting School of Engineering’s Department of Computer Science, even though autoregressive models can successfully fool most humans, their capabilities will always be limited.
Lin states that their work revealed that certain desired qualities of intelligence, such as the ability to form consistent arguments without errors, will never emerge in any autoregressive model of reasonable size or speed.
Lin’s research showed that autoregressive models follow a linear thought process that cannot support reasoning, because they are designed to predict the next word very quickly from the words that came before it. Unlike human writers, the models are not built to backtrack, edit, or revise their work.
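To make that left-to-right process concrete, here is a minimal sketch of autoregressive generation in Python. The vocabulary and probability table are invented for illustration; a real model computes next-word probabilities with a large neural network and conditions on the entire prefix, but the one-directional, no-backtracking loop is the same.

```python
import random

# Toy "model": for each word, the possible next words and their weights.
# These entries are invented for illustration; a real autoregressive model
# conditions on the whole prefix, not just the last word.
BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(max_words=10):
    """Generate text strictly left to right: each word is sampled from a
    distribution over next words given what was already emitted. There is
    no lookahead beyond the next word, and no mechanism to go back and
    revise an earlier choice once it has been made."""
    words = ["<start>"]
    for _ in range(max_words):
        choices = BIGRAMS[words[-1]]
        next_word = random.choices(list(choices), weights=choices.values())[0]
        if next_word == "<end>":
            break
        words.append(next_word)  # committed: never edited afterward
    return " ".join(words[1:])

print(generate())  # e.g. "the cat sat"
```

Notice that each word is committed the instant it is sampled; this is the structural limitation Lin describes, independent of how powerful the next-word predictor is.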
Human vs. AI
Humans usually make multiple edits to a piece of writing before producing a final version. The finished product may look flawless, but it is almost never produced in a single pass. When AI models are trained to mimic human writing, they never observe the many revisions and rewrites that preceded the final text.
Lin’s team also identified another weakness of current autoregressive models: they do not give the computer enough time to “think” ahead about what it should say beyond the next word, so there is no guarantee that the output will not be nonsense.
Autoregressive models have proved useful in certain scenarios, but they are not appropriate computational models for reasoning. The results also suggest that certain elements of intelligence do not emerge from merely getting machines to mimic how humans speak.
The more text an autoregressive model produces, the more obvious its mistakes become. This puts the text at risk of being flagged by far less advanced computer programs, which need far fewer resources than the generator itself to distinguish what was written by an autoregressive model from what was written by a human.
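As an illustration of the kind of lightweight check such a program might run, here is a hedged sketch that flags text whose word repetition grows suspiciously high in later stretches, one cheap signal of degenerate long-form generation. The windowing scheme and the 0.5 threshold are invented placeholders, not values from Lin’s research; real detectors are more sophisticated, but they can likewise be far cheaper than the model they detect.

```python
def repetition_scores(text, window=50):
    """Return, for each window of the text, the fraction of words that are
    repeats within that window. Rising repetition in long outputs is one
    inexpensive heuristic signal; it is illustrative, not a real detector."""
    words = text.lower().split()
    if not words:
        return []
    scores = []
    for start in range(0, max(1, len(words) - window + 1), window):
        chunk = words[start:start + window]
        scores.append(1 - len(set(chunk)) / len(chunk))
    return scores

def looks_machine_generated(text, threshold=0.5):
    """Flag text if any window is dominated by repeated words.
    The threshold is an arbitrary placeholder, not a tuned value."""
    return any(score > threshold for score in repetition_scores(text))
```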
The Future of AI Generated Text
Lin believes that the positives of AI that can reason far outweigh the negatives. One negative could be the spread of misinformation, but computers can distinguish writing that originates from an autoregressive model from writing that originates from a human. He points to text summarization as an example of how reasoning-capable AI could be useful.
Lin explains that computers can read a lengthy article or a table containing numbers and text, and then explain what’s going on in a few sentences. “For example, summarizing a news story, or a restaurant’s Yelp ratings,” Lin said. “Models that are capable of reasoning can generate texts that are more on the spot, and more factually accurate, too.”
Lin has been working on this research, which is part of his thesis, for several years with his adviser, Professor Jason Eisner. The findings will inform the design of a neural network architecture for his thesis research on “Neural Regular Expressions for Making AI Smarter.”
As Lin explained, NREs can be used to build a dialog system in which a machine infers unobservable properties of a human interaction, such as intent, using rules predefined by humans. These inferred properties can then shape the machine’s response.
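The NRE architecture itself is Lin’s ongoing research, but the general pattern he describes, human-written rules inferring a hidden intent that then shapes the reply, can be sketched with ordinary regular expressions. Every pattern, intent label, and response below is invented for illustration and is not taken from his work.

```python
import re

# Human-predefined rules: each regular expression maps surface text to a
# latent intent. These patterns and intents are hypothetical examples.
INTENT_RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "greeting"),
    (re.compile(r"\b(refund|money back)\b", re.I), "refund_request"),
    (re.compile(r"\b(thanks|thank you)\b", re.I), "gratitude"),
]

# The inferred intent, not the raw text, selects the machine's response.
RESPONSES = {
    "greeting": "Hello! How can I help you today?",
    "refund_request": "I can help with that. What is your order number?",
    "gratitude": "You're welcome!",
    "unknown": "Could you tell me a bit more about what you need?",
}

def infer_intent(utterance):
    """Infer the unobservable intent behind an utterance via predefined rules."""
    for pattern, intent in INTENT_RULES:
        if pattern.search(utterance):
            return intent
    return "unknown"

def respond(utterance):
    """Shape the machine's response using the inferred, never-observed intent."""
    return RESPONSES[infer_intent(utterance)]

print(respond("I want my money back, please"))
# -> "I can help with that. What is your order number?"
```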
Source: TechXplore