It had generated a complete reference with title, author, and everything. Completely fabricated.
It’s not a search engine; it’s a crappy imitation of one.
16K notes
Detecting AI-generated research papers through "tortured phrases"
So, a recent paper describes a new way to figure out whether a "research paper" is, in fact, phony AI-generated nonsense. How, you may ask? The same way teachers and professors detect whether you just copied your paper from online and threw a thesaurus at it!
It looks for “tortured phrases”; that is, phrases which resemble standard field-specific jargon, but seemingly mangled by a thesaurus. Here are some examples (transcript below the cut):
profound neural organization - deep neural network
(fake | counterfeit) neural organization - artificial neural network
versatile organization - mobile network
organization (ambush | assault) - network attack
organization association - network connection
(enormous | huge | immense | colossal) information - big data
information (stockroom | distribution center) - data warehouse
(counterfeit | human-made) consciousness - artificial intelligence (AI)
elite figuring - high performance computing
haze figuring - fog/mist/cloud computing
designs preparing unit - graphics processing unit (GPU)
focal preparing unit - central processing unit (CPU)
work process motor - workflow engine
facial acknowledgement - face recognition
discourse acknowledgement - voice recognition
mean square (mistake | blunder) - mean square error
mean (outright | supreme) (mistake | blunder) - mean absolute error
(motion | flag | indicator | sign | signal) to (clamor | commotion | noise) - signal to noise
worldwide parameters - global parameters
(arbitrary | irregular) get right of passage to - random access
(arbitrary | irregular) (backwoods | timberland | lush territory) - random forest
(arbitrary | irregular) esteem - random value
subterranean insect (state | province | area | region | settlement) - ant colony
underground creepy crawly (state | province | area | region | settlement) - ant colony
leftover vitality - remaining energy
territorial normal vitality - local average energy
motor vitality - kinetic energy
(credulous | innocent | gullible) Bayes - naïve Bayes
individual computerized collaborator - personal digital assistant (PDA)
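The check sketched above boils down to keeping a table of tortured-phrase patterns and scanning a text for them. Here is a minimal sketch of that idea in Python; the pattern table, function name, and sample text are my own illustrations, not the paper's actual detection tool.

```python
import re

# A handful of the tortured-phrase patterns from the list above, written as
# regexes. Alternations like (enormous|huge|...) mirror the variants shown.
TORTURED_PHRASES = {
    r"profound neural organization": "deep neural network",
    r"(?:fake|counterfeit) neural organization": "artificial neural network",
    r"(?:enormous|huge|immense|colossal) information": "big data",
    r"(?:credulous|innocent|gullible) bayes": "naive Bayes",
    r"(?:arbitrary|irregular) (?:backwoods|timberland|lush territory)": "random forest",
}

def find_tortured_phrases(text):
    """Return (matched phrase, likely intended term) pairs found in text."""
    hits = []
    for pattern, intended in TORTURED_PHRASES.items():
        for match in re.finditer(pattern, text.lower()):
            hits.append((match.group(0), intended))
    return hits

sample = ("We train a profound neural organization on colossal information "
          "and compare it against an irregular timberland baseline.")
for phrase, intended in find_tortured_phrases(sample):
    print(f"found {phrase!r} - probably means {intended!r}")
```

A real detector would need a much larger pattern list and some tolerance for inflections, but even this toy version flags all three mangled terms in the sample sentence.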
86 notes
Thought: we shouldn't be calling all these "AI" things Artificial Intelligence.
Instead, I propose we use the term "Algorithmic Generators", or "AG" for short, for these types of things.
That better describes what they actually are, doesn't incorrectly peg them as "intelligent", and avoids further muddying what "AI" actually means.
106 notes
I need to know what a large language model trained only on Tumblr content would be like. But I also don't want it to take over the world. Because, obviously, it would.
94 notes
When questioned, ChatGPT doubles down on how it is definitely correct.
But it's not relying on some weird glitchy interpretation of the art itself, a la adversarial turtle-gun. It just reports the drawing as definitely being of the word "lies" because that kind of self-consistency is what would happen in the kind of human-human conversations in its internet training data. I tested this by starting a brand new chat and then asking it what the art from the previous chat said.
Google's Bard, on the other hand, interprets it differently.
Bard has the same tendency to generate illegible ASCII art and then praise its legibility, except in its case, all its art is cows.
Not to be outdone, Bing Chat (GPT-4) will also praise its own ASCII art - once you get it to admit it can even generate and rate ASCII art. For the "balanced" and "precise" versions I had to make my request all fancy and quantitative.
With Bing Chat I wasn't able to ask it to read its own ASCII art, because it strips out all the formatting and is therefore illegible - oh wait, no, even the "precise" version tries to read it anyways.
These language models are so unmoored from the truth that it's astonishing that people are marketing them as search engines.
More at AI Weirdness
6K notes
I came across this neural network that turns photos of people into anime, and here are the results:
Q looks like a typical anime villain but OMG he's hot 🔥
Definitely, this neural network has a problem with age. And with bald people...
But the Starfleet uniform looks great!✨
*Here is a link to this neural network if you are interested heh*
67 notes