More Battlefield AI Will Make the Fog of War More Deadly
The United States military is not the unrivaled force it once was, but Alexandr Wang, CEO of startup Scale AI, told a congressional committee last week that it could establish a new advantage by harnessing artificial intelligence.
“We have the largest fleet of military hardware in the world,” Wang told the House Armed Services Subcommittee on Cyber, Information Technology and Innovation. “If we can properly set up and instrument this data that’s being generated ... then we can create a pretty insurmountable data advantage when it comes to military use of artificial intelligence.”
Wang’s company has a vested interest in that vision, since it regularly works with the Pentagon processing large quantities of training data for AI projects. But there is a conviction within US military circles that increased use of AI and machine learning are virtually inevitable—and essential. I recently wrote about that growing movement and how one Pentagon unit is using off-the-shelf robotics and AI software to more efficiently surveil large swaths of the ocean in the Middle East.
Besides the country’s unparalleled military data, Wang told the congressional hearing that the US has the advantage of being home to the world’s most advanced AI chipmakers, like Nvidia, and the world’s best AI expertise. “America is the place of choice for the world’s most talented AI scientists,” he said.
Wang’s interest in military AI is also worth paying attention to because Scale AI is at the forefront of another AI revolution: the development of powerful large language models and advanced chatbots like ChatGPT.
No one is thinking of conscripting ChatGPT into military service just yet, although there have been a few experiments involving use of large language models in military war games. But observers see US companies’ recent leaps in AI performance as another key advantage that the Pentagon might exploit. Given how quickly the technology is developing—and how problematic it still is—this raises new questions about what safeguards might be needed around military AI.
This jump in AI capabilities comes as some people’s attitudes toward the military use of AI are changing. In 2017, Google faced a backlash for helping the US Air Force use AI to interpret aerial imagery through the Pentagon’s Project Maven. But Russia’s invasion of Ukraine has softened public and political attitudes toward collaboration between the military and tech companies, and it has demonstrated the potential of cheap autonomous drones and of commercial AI for data analysis. Ukrainian forces are using deep learning algorithms to analyze aerial imagery and footage. The US company Palantir has said that it is providing targeting software to Ukraine. And Russia is increasingly focusing on AI for autonomous systems.
Despite widespread fears about “killer robots,” the technology is not yet reliable enough to be used in this way. And while reporting on the Pentagon’s AI ambitions, I did not come across anyone within the Department of Defense, US forces, or AI-focused startups eager to unleash fully autonomous weapons.
But greater use of AI will create a growing number of military encounters in which humans are removed or abstracted from the equation. And while some people have compared AI to nuclear weapons, the more immediate risk is less the destructive power of military AI systems than their potential to deepen the fog of war and make human errors more likely.
When I spoke to John Richardson, a retired four-star admiral who served as the US Navy’s chief of naval operations between 2015 and 2018, he was convinced that AI will have an effect on military power similar to the industrial revolution and the atomic age. And he pointed out that the side that harnessed those previous revolutions best won the past two world wars.
But Richardson also talked about the role of human connections in managing military interactions driven by powerful technology. While serving as Navy chief he went out of his way to get to know his counterparts in the fleets of other nations. “Every time we met or talked, we got a better sense of one another,” he says. “What I really wanted to do was make sure that should something happen—some kind of miscalculation or something—I could call them up on relatively short notice. You just don’t want that to be your first call.”
Now would be a good time for the world’s military leaders to start talking to each other about the risks and limitations of AI, too.
OpenAI and Google Form New Group to Self-Regulate Their AI
Bot's Club
The AI industry big boys just formed a brand new table at the Silicon Valley cafeteria.
OpenAI, Microsoft, and Google, along with the Google-owned DeepMind and buzzy startup Anthropic, have together formed the Frontier Model Forum, an industry-led body that, per a press release, claims to be dedicated to the "safe and responsible development" of AI.
"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," Microsoft president Brad Smith said in the statement. "This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."
In other words, it's a stab at AI industry self-regulation. But while it is good to see major industry players join forces to establish some best practices for responsible AI development, self-regulation has some serious limitations. After all, with no government able to enforce any of the Frontier Model Forum's rules through actions like sanctions, fines, or criminal proceedings, the body, at least for now, is mostly symbolic. Extracurricular group activity energy.
Self-Regulation Station
It's also worth noting that some notable names were left out from the jump. The Mark Zuckerberg-helmed Meta-formerly-Facebook apparently isn't a member of the club, while Elon Musk and his newly-launched xAI, which was — sigh — apparently developed to "understand reality," were both left on the sidelines. (That said, though Meta, which has some pretty advanced models on deck, might have some room to complain about the snub, Musk and his stonerbot probably don't.)
The Forum does say that others can sit with them in the future, as long as they're making what the group deems to be "frontier models" — defined by the group as "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks" — and sign on to a general and mostly unspecified commitment to safety and responsibility.
Again, we can't say it isn't good to see these kinds of discussions happening between major AI firms. But we also can't ignore the fact that these are all for-profit companies with a financial incentive to churn out AI products, and non-binding self-regulation is far from real, industry-wide government rules and oversight. Is it a start? Sure! But let's not let the buck stop here.
More on AI regulation: Ex-Google CEO Says We Should Trust AI Industry to Self-Regulate
The first electric flying car will hit U.S. dealerships sometime in 2026
Slowly but surely, flying cars are becoming a reality.
Alef Aeronautics, a company that has received legal approval in the U.S. for its first electric flying car, has already secured 2,500 pre-orders for the vehicle. The flying car will reportedly be called the 'Model A.'
As shared by Electrek, the Model A will also be available at some U.S. dealerships, making it the first time a modern aircraft vehicle with vertical takeoff capabilities will be sold through a car dealership.
Model A is a two-seater vehicle that can also drive on roads, in addition to being able to take off and land vertically. It has a driving range of 200 miles and a flying range of 110 miles.
The flying Model A will reportedly start production in late 2025 and cost $300,000 USD (roughly $396,610 CAD). Deliveries will follow shortly after, so sometime in 2026 is a conservative guess.
Tim Draper, a venture capitalist known for his early investment in Tesla, has invested in Alef to the tune of roughly $3 million USD (roughly $3.9 million CAD).
"We’re excited to see such strong initial demand for the Alef flying car. We’re thankful for the notes of gratitude and inspiration we received with some of the pre-orders. We still have a road to go before starting deliveries, but where we’re going, we don’t need roads," said Alef CEO Jim Dukhovny.
Apart from the Model A, Alef is also planning to launch a four-person sedan called “Model Z” in 2035, which will have a flying range of over 300 miles and a driving range of 220 miles. The Model Z will reportedly be much cheaper than the Model A. It will start at $35,000 USD ($46,000 CAD).
Development for the Model A has been underway since 2015. We are not certain yet if driving the vehicle would require a special license or a full pilot license. It's also unclear if it would be allowed to take off and land anywhere, or if there would be designated spots or plane runways for the vehicle to take off and land.
Source: Alef Via: Electrek
Scientists Say Recycling Has Backfired Spectacularly
Reduce, Reuse, Repudiate
While recycling campaigns can help limit what heads to the landfill, scientists are now saying that it's masked the glaring problem of over-production and de-emphasized other waste reduction strategies that are far more sustainable.
In a new essay for The Conversation, an interdisciplinary group of researchers out of the University of Virginia that's been studying the psychology of waste found that many people over-emphasize the recycling aspect of the waste management industry's "Reduce, Reuse, Recycle" slogan. The result, they say, is that the message has backfired: the public has come to mistakenly consider recycling a get-out-of-jail-free card, confuses which goods are actually recyclable in the first place, and ignores the growing waste production catastrophe.
In a series of experiments, the UVA researchers first asked participants to rank "reduce," "reuse," and "recycle" in order of efficacy — the correct answer being the order in the old slogan — finding that a whopping 78 percent got it wrong. In a second experiment, the researchers had participants use a computer program to virtually "sort" waste into recycling, compost, and landfill bins. Unfortunately, the outcome of that survey was even more stark, with many incorrectly putting non-recyclable waste, such as plastic bags and lightbulbs, into the virtual recycle bin.
Cause and Effect
While over-emphasizing or getting the recycling protocol wrong is an issue on its own, its downstream effects have been devastating as microplastics from consumer waste continue to pollute our oceans, land masses, and bodies — and as greenhouse gases from the production of all this stuff keep throttling our planet.
While lots of governmental bodies are, as the researchers note, attempting to stem and even ban the proliferation of single-use plastic goods such as plastic straws and bags, the industries responsible for creating those landfill-bound items keep making more and more of them, and even their own mitigation strategies are voluntary.
The onus to reduce, reuse, and recycle ends up falling on consumers — who, as the aforementioned studies show, aren't as well versed in how to do those things as they should be. It's a status quo that does little to tackle the global waste crisis and ends up using a lot of logistical and worker power to boot.
More on waste: The Ocean's Plastic Pollution Has Spiked to "Unprecedented" Levels
Pentagon-Funded Study Uses AI to Detect 'Violations of Social Norms' in Text
New research funded by the Pentagon suggests that artificial intelligence can scan and analyze blocks of text to discern whether the humans who wrote it have done something wrong or not.
The paper, written by two researchers at Ben-Gurion University, leverages predictive models that can analyze messages for what they call “social norm violations.” To do this, researchers used GPT-3 (a programmable large language model created by OpenAI that can automate content creation and analysis), along with a method of data parsing known as zero-shot text classification, to identify broad categories of “norm violations” in text messages. The researchers break down the purpose of their project like this:
While social norms and their violations have been intensively studied in psychology and the social sciences the automatic identification of social norms and their violation is an open challenge that may be highly important for several projects...It is an open challenge because we first have to identify the features/signals/variables indicating that a social norm has been violated...For example, arriving at your office drunk and dirty is a violation of a social norm among the majority of working people. However, “teaching” the machine/computer that such behavior is a norm violation is far from trivial.
Of course, the difficulty with this premise is that norms are different depending on who you are and where you’re from. Researchers claim, however, that while various cultures’ values and customs may differ, human responses to breaking with them may be fairly consistent. The report notes:
While social norms may be culturally specific and cover numerous informal “rules”, how people respond to norm violation through evolutionary-grounded social emotions may be much more general and provide us with cues for the automatic identification of norm violation...the results [of the project] support the important role of social emotions in signaling norm violation and point to their future analysis and use in understanding and detecting norm violation.
Researchers ultimately concluded that “a constructive strategy for identifying the violation of social norms is to focus on a limited set of social emotions signaling the violation,” namely guilt and shame. In other words, the scientists wanted to use AI to understand when a mobile user might be feeling bad about something they’ve done. To do this, they generated their own “synthetic data” via GPT-3, then leveraged zero-shot text classification to train predictive models that could “automatically identify social emotions” in that data. The hope, they say, is that this model of analysis can be pivoted to automatically scan SMS histories for signs of misbehavior.
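The paper's code isn't public, but the mechanics of zero-shot text classification are straightforward to sketch. Below is a minimal, hypothetical example using the open-source Hugging Face transformers library rather than the researchers' GPT-3 pipeline; the model choice, messages, and label set are all invented for illustration.

```python
# A toy sketch of zero-shot classification for social emotions. This is an
# illustration of the general technique, not the Ben-Gurion researchers'
# actual setup; the model, messages, and labels are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

messages = [
    "I can't believe I showed up to the office drunk. I feel terrible.",
    "Picked up groceries on the way home, see you soon!",
]

# The paper singles out guilt and shame as signals of norm violation; a
# neutral label gives ordinary messages somewhere to land.
labels = ["guilt", "shame", "neutral"]

for msg in messages:
    result = classifier(msg, candidate_labels=labels)
    print(f"{result['labels'][0]:>8}  {msg}")
```

In the paper's setup, the equivalent step runs over GPT-3-generated synthetic messages to train a dedicated predictive model, but the classification logic looks much the same.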
Somewhat unsettlingly, this research was funded by the Pentagon’s Defense Advanced Research Projects Agency (DARPA). Created in 1958, DARPA has been at the forefront of U.S. military research and development for the better part of a century, frequently helping to create some of the most important technological innovations of our time (see: drones, vaccines, and the internet, among many others). The agency funds a broad diversity of research areas, always in the hopes of finding the next big thing for the American war machine.
Ben-Gurion researchers say their project was supported by DARPA’s computational cultural understanding program—an initiative with the vague mandate of developing “cross-cultural language understanding technologies to improve a DoD operator’s situational awareness and interactional effectiveness.” I’m not 100 percent sure what that’s supposed to mean, though it sounds (basically) like the Pentagon wants to create software that can analyze foreign populations for them so that, when the U.S. inevitably goes to war with said populations, we’ll understand how they’re feeling about it. That said, why DARPA would specifically want to study the topic of “social norm violation” is a bit unclear, so Gizmodo reached out to the agency for additional context and will update this story if it responds.
In essence, the research seems to be yet another form of sentiment analysis—an already fairly well-traversed area of the surveillance industrial complex. It’s also yet another sign that AI will inexorably be used to broaden the U.S. defense community’s powers, with decidedly alarming results.
The AI-Powered, Totally Autonomous Future of War Is Here
A fleet of robot ships bobs gently in the warm waters of the Persian Gulf, somewhere between Bahrain and Qatar, maybe 100 miles off the coast of Iran. I am on the nearby deck of a US Coast Guard speedboat, squinting off what I understand is the port side. On this morning in early December 2022, the horizon is dotted with oil tankers and cargo ships and tiny fishing dhows, all shimmering in the heat. As the speedboat zips around the robot fleet, I long for a parasol, or even a cloud.
The robots do not share my pathetic human need for shade, nor do they require any other biological amenities. This is evident in their design. A few resemble typical patrol boats like the one I’m on, but most are smaller, leaner, lower to the water. One looks like a solar-powered kayak. Another looks like a surfboard with a metal sail. Yet another reminds me of a Google Street View car on pontoons.
These machines have mustered here for an exercise run by Task Force 59, a group within the US Navy’s Fifth Fleet. Its focus is robotics and artificial intelligence, two rapidly evolving technologies shaping the future of war. Task Force 59’s mission is to swiftly integrate them into naval operations, which it does by acquiring the latest off-the-shelf tech from private contractors and putting the pieces together into a coherent whole. The exercise in the Gulf has brought together more than a dozen uncrewed platforms—surface vessels, submersibles, aerial drones. They are to be Task Force 59’s distributed eyes and ears: They will watch the ocean’s surface with cameras and radar, listen beneath the water with hydrophones, and run the data they collect through pattern-matching algorithms that sort the oil tankers from the smugglers.
A fellow human on the speedboat draws my attention to one of the surfboard-style vessels. It abruptly folds its sail down, like a switchblade, and slips beneath the swell. Called a Triton, it can be programmed to do this when its systems sense danger. It seems to me that this disappearing act could prove handy in the real world: A couple of months before this exercise, an Iranian warship seized two autonomous vessels, called Saildrones, which can’t submerge. The Navy had to intervene to get them back.
The Triton could stay down for as long as five days, resurfacing when the coast is clear to charge its batteries and phone home. Fortunately, my speedboat won’t be hanging around that long. It fires up its engine and roars back to the docking bay of a 150-foot-long Coast Guard cutter. I head straight for the upper deck, where I know there’s a stack of bottled water beneath an awning. I size up the heavy machine guns and mortars pointed out to sea as I pass.
The deck cools in the wind as the cutter heads back to base in Manama, Bahrain. During the journey, I fall into conversation with the crew. I’m eager to talk with them about the war in Ukraine and the heavy use of drones there, from hobbyist quadcopters equipped with hand grenades to full-on military systems. I want to ask them about a recent attack on the Russian-occupied naval base in Sevastopol, which involved a number of Ukrainian-built drone boats bearing explosives—and a public crowdfunding campaign to build more. But these conversations will not be possible, says my chaperone, a reservist from the social media company Snap. Because the Fifth Fleet operates in a different region, those on Task Force 59 don’t have much information about what’s going on in Ukraine, she says. Instead, we talk about AI image generators and whether they’ll put artists out of a job, about how civilian society seems to be reaching its own inflection point with artificial intelligence. In truth, we don’t know the half of it yet. It has been just a day since OpenAI launched ChatGPT, the conversational interface that would break the internet.
Deranged Reality TV Show Psychologically Tortures Participants by Showing Them Deepfakes of Their Partners Cheating
We're in Hell
Just when you think reality TV can't stoop any lower, it does yet again. To wit: in fresh new manmade horror, a Netflix reality show called "Falso Amor" (released in English as "Deep Fake Love") splits five real-life couples up into two different houses, adds a bunch of hot singles to the mix, and then subjects individuals to the experience of watching their partner cheat on them in videos that may or may not be deepfaked.
Yes, seriously, and from the streaming service that brought you "Black Mirror." The Spanish-language program asks participants to watch the cheating clips, many of which are just convincing fakes. Participants then have to guess whether the videos are real or cooked up by the AI. At the end of the show, the couple with the most correct guesses wins 100,000 euros (that's about $110,000 in US dollars) because this is the world we now live in.
Obviously, this premise is a dystopian nightmare — as Platformer's Casey Newton put it on The New York Times' Hard Fork podcast, "God does not exist in the universe of 'Deep Fake Love'" — and we would not wish this psychological torture on anyone.
Ethics With Drama
In another particularly dark turn, per Decider, part of the premise of the show is that the couples didn't actually know that they would be subjected to the deepfaked clips.
Not to get all high and mighty about bad reality television, but there are some serious moral and ethical ambiguities here. This is a burgeoning technology, and while in some cases it's been used for absurd fun, it's most often used for abusive purposes — scams, misinformation, and perhaps most insidiously, inserting real people into porn without their consent. It could be argued, as the Hard Fork hosts did, that mainstreaming the tech for the bizarre premise of "Deep Fake Love" might normalize a potentially dangerous technology before we understand the breadth of its impact.
And to that end, there's still very little in the way of research regarding the psychological impact of deepfakes. Whatever deepfake tech the show's creators are using is incredibly convincing, and we can imagine that the clips could have a lasting mental and emotional impact; what exactly those impacts might be, however, and how long they might stick around, is unclear.
It's safe to say that folks on Twitter had their misgivings as well.
"I knew the AI shit was gonna be wild but watching this show 'Deep Fake Love' is really putting things into perspective," tweeted one netizen. "You're not even going to even be able to believe your eyes after a while cause of deep fakes getting better."
"Nah, this Deep Fake Love show is rough," added another. "They're going through turmoil."
More on deepfakes: Reality Is Melting as Lawyers Claim Real Videos Are Deepfakes
‘vktor’ autonomous electric vehicle enables more sustainable and efficient farming
Alessandro Pennese facilitates more productive agriculture
 In the pursuit of a more sustainable and productive agricultural industry, Alessandro Pennese has unveiled Vktor — an autonomous electric vehicle system that enhances farming efficiency, reduces operation costs, and improves crop quality. By replacing manual labor with smart machines, Vktor aims to revolutionize the agricultural landscape, prioritizing worker safety and conditions while optimizing logistical organization.
With sustainability and safety as top priorities, the Vktor is outfitted with four proximity sensors placed at the ends of its body, ensuring it navigates through its environment with utmost precision. Additionally, a front-mounted Lidar sensor enhances its perception capabilities, contributing to efficient and obstacle-free autonomous navigation.
all images courtesy of Alessandro Pennese
  autonomous systems integrate sustainability
 Emphasizing its commitment to autonomy, the Vktor operates without any human intervention during its tasks. This characteristic ensures that risky agricultural operations that may endanger workers’ physical health are handled with precision and safety. With a fully autonomous approach, the system significantly reduces the need for manual labor and its associated health risks.
In line with the project’s focus on sustainability, designer Alessandro Pennese incorporates environmentally friendly construction elements. The vehicle’s body components are crafted using Sheet Moulding Compound (SMC) technology, a thermosetting sheet material composed of glass fibers. The production process involves hot forming in coupled steel molds, ensuring durability and ecological considerations. Further, the Vktor is equipped with a powerful battery pack boasting 12 kWh of capacity and spanning 1200 mm in length, enabling seamless operation without frequent recharging. The vehicle’s crawler body adheres to a maximum width of 1400 mm and a length of 2100 mm or less; meanwhile, two pivoting wheels facilitate steering maneuvers, lifting the vehicle when required to load its weight onto the rear drive wheel.
    project info:
name: Vktor — Autonomous Farming Vehicle
designer: Alessandro Pennese
  designboom has received this project from our DIY submissions feature, where we welcome our readers to submit their own work for publication. see more project submissions from our readers here.
 edited by: ravail khan | designboom
Scientists Working on Merging AI With Human Brain Cells
Assuming Control
A team of researchers just got a $600,000 grant from Australia's Office of National Intelligence to study ways of merging human brain cells with artificial intelligence.
In collaboration with Melbourne-based startup Cortical Labs, the team has already successfully demonstrated how a cluster of roughly 800,000 brain cells in a Petri dish is capable of playing a game of "Pong."
The basic idea is to merge biology with AI, something that could forge new frontiers for machine learning tech for self-driving cars, autonomous drones, or delivery robots — or at least that's what the government is hoping to accomplish with its investment.
In Silico
And the researchers aren't shying away from making some bold claims about their work.
"This new technology capability in the future may eventually surpass the performance of existing, purely silicon-based hardware," said Adeel Razi, team lead and associate profess at Monarch University, in a statement.
"The outcomes of such research would have significant implications across multiple fields such as, but not limited to, planning, robotics, advanced automation, brain-machine interfaces, and drug discovery, giving Australia a significant strategic advantage," he added.
According to Razi, the tech could allow a machine intelligence to "learn throughout its lifetime" like human brain cells, allowing it to learn new skills without losing old ones, as well as applying existing knowledge to new tasks.
Razi and his colleagues are aiming to grow brain cells in a lab-dish setup, dubbed the DishBrain system, to investigate this process of "continual lifelong learning."
It's a highly ambitious project that will likely take quite some time to complete.
"We will be using this grant to develop better AI machines that replicate the learning capacity of these biological neural networks," Razi said. "This will help us scale up the hardware and methods capacity to the point where they become a viable replacement for in silico computing."
More on the research: Researchers Teach Human Brain Cells in a Dish to Play "Pong"
AI Developers Are Already Quietly Training AI Models Using AI-Generated Data
Self-Fulfilling
While most AI models are built on data made by humans, some companies are starting to use — or are trying to figure out how to use — data that was itself generated by AI. If they can pull it off, it could be a huge boon, albeit one that makes the entire AI ecosystem feel even more like a sort of algorithmic ouroboros.
As the Financial Times reports, companies including OpenAI, Microsoft, and the two-billion-dollar startup Cohere are increasingly investigating what's known as "synthetic data" to train their large language models (LLMs) for a number of reasons, not least of which being that it's apparently more cost-effective.
"Human-created data," Cohere CEO Aiden Gomez told the FT, "is extremely expensive."
Beyond the relative cheapness of synthetic data, however, is the scale issue. Training cutting-edge LLMs already consumes essentially all the human-created data that's actually available, meaning that to build even stronger ones, developers are almost certainly going to need more.
"If you could get all the data that you needed off the web, that would be fantastic," Gomez said. "In reality, the web is so noisy and messy that it’s not really representative of the data that you want. The web just doesn’t do everything we need."
It's All Happening
As the CEO noted, Cohere and other companies are already quietly using synthetic data to train their LLMs "even if it’s not broadcast widely," and others like OpenAI seem to expect to use it in the future.
During an event in May, OpenAI CEO Sam Altman quipped that he is "pretty confident that soon all data will be synthetic data," the report notes, and Microsoft has begun publishing studies about how synthetic data could beef up more rudimentary LLMs. There are even startups whose whole purpose is selling synthetic data to other companies.
There is a downside, of course: as critics point out, the integrity and reliability of AI-generated data could easily be called into question, given that even AIs trained on human-generated material are known to make major factual errors. And the process could generate some messy feedback loops. Researchers at Oxford and Cambridge call these potential problems "irreversible defects" in a recent paper, and it's not hard to see why.
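A toy simulation makes the worry concrete. The sketch below is a deliberately simplified illustration of the feedback loop, not the Oxford/Cambridge experiment: fit a model to data, sample synthetic data from the fit, refit on the samples alone, and repeat.

```python
# Minimal model-collapse loop: each generation trains only on the previous
# generation's output. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=200)  # stand-in for human data

mu, sigma = data.mean(), data.std()
print(f"gen  0: mu={mu:+.3f}, sigma={sigma:.3f}")

for gen in range(1, 21):
    synthetic = rng.normal(mu, sigma, size=200)    # model-generated data
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on it alone
    print(f"gen {gen:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")

# The fitted parameters drift away from the true (0, 1) as sampling error
# compounds, and because no real data re-enters the loop, the drift is
# never corrected -- a miniature version of the "irreversible defects" worry.
```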
Overall, the moonshot that companies like Cohere are working toward is self-teaching AIs that generate their own synthetic data.
"What you really want is models to be able to teach themselves," Gomez said. "You want them to be able to... ask their own questions, discover new truths and create their own knowledge. That’s the dream."
More on AI: Fully AI-Generated Influencers Are Getting Thousands of Reactions Per Thirst Trap
Google Secretly Showing Newspapers an AI-Powered News Generator
Google is secretly showing off an AI tool that can produce news stories to major newspapers, including The New York Times, The Washington Post, and The Wall Street Journal.
The tool, dubbed Genesis, can digest public information and generate news content, according to reporting by the New York Times, in yet another sign that AI-generated — or at least AI-facilitated — content is about to flood the internet.
Google is stridently denying that the tool is meant to replace journalists, saying it will instead serve as a "kind of personal assistant for journalists, automating some tasks to free up time for others."
Media executives, however, were taken aback, describing the tech giant's pitch as unsettling, telling the NYT that it "seemed to take for granted the effort that went into producing accurate and artful news stories."
Other publications have dived headfirst into using AI to generate news stories, with outlets including CNET, Gizmodo, and BuzzFeed publishing AI-generated articles that often turned out to be rife with errors and plagiarism.
Journalists were appalled at the news.
"Goodness gracious," tweeted The Information founder Jessica Lessin. "Let it be said, journalists don't need Google to write their articles as 'a personal assistant.' And anything that Google (or any AI) could write has no real original reporting value."
"This could be incredibly dangerous for journalism as a business," tweeted Kansas City-based radio editor Gabe Rosenberg," especially if Google acts to juice its own search results to prioritize AI content."
"And worse yet is what these large media companies are already doing to screw over actual human workers," he added. "I do not like this at all!"
Having a company as influential as Google enter the fray will likely only add to this momentum, piling more pressure on media outlets to adopt the tech.
Google maintains that the goal isn't to replace journalists.
"Quite simply, these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles," Google spokesperson Jenn Crider told the NYT.
"For instance, AI-enabled tools could assist journalists with options for headlines or different writing styles," she added.
The news comes after Google showed off a new AI-powered search interface, dubbed "Search Generative Experience," or SGE for short, that can summarize entire webpages by generating "AI snapshots" — something that could further undermine reporting by human journalists.
There are plenty of reasons to be skeptical of Google's latest AI tech. The company's own track record has been less than stellar, with its AI chatbot Bard failing to reliably tell truth from fiction. Researchers have found that it is also incredibly easy to get the chatbot to generate misinformation at scale.
In other words, how can an AI model be of use to journalists if it's detached from reality?
With the media industry exploring new ways to adopt generative AI in its newsrooms, it's only a matter of time before the conversation starts revolving around reducing headcounts.
In fact, we've already seen several publications lay off human journalists while pivoting to AI tech.
And while generative AI has yet to demonstrate an ability to distill information in a coherent and reliably truthful manner, that may not always be the case.
It's not just jobs on the line — the entire reputations of news publications are at stake as well. And that should have media execs think long and hard before they take up Google on its offer.
More on generative AI in journalism: Google Unveils Plan to Demolish the Journalism Industry
Fully AI-Generated Influencers Are Getting Thousands of Reactions Per Thirst Trap
For years, CGI-generated virtual influencers have been shilling brands, enjoying lavish lifestyles, and amassing substantial followings on social media.
So it was probably inevitable that influencer culture would soon be sucked into the AI craze. Indeed, earlier this year, one influencer created an AI chatbot version of herself that she rented out as a $1-per-minute "virtual girlfriend."
Now things are getting even weirder. Thanks to the advent of AI-powered image generators like Stable Diffusion and Midjourney, some are now fabricating entire feeds of internet personalities that don't actually exist.
The result is a fascinating journey through the uncanny valley: haunting Twitter and Instagram feeds showing these AI-generated influencers — virtually all taking the form of conventionally attractive women — posing and preening in virtual thirst traps, to the delight of sometimes tens of thousands of seemingly human fans. Maybe we can't blame them; aside from considerable amounts of skin-smoothing and repetitive backgrounds, both hallmarks of AI image generators, it's a surprisingly convincing illusion.
Take an account going by the name of "Milla Sofia," for instance, described as a "19-year-old virtual girl from Helsinki, Finland," who has amassed tens of thousands of followers on Instagram, Twitter, and TikTok.
Good Morning. Making memories by the crystal-clear waters of Bora Bora.#bora #borabora #sea #sun #vacationmode #gm #sundayvibes pic.twitter.com/OAIDTEtEaZ
— Milla Sofia (@AiModelMilla) June 11, 2023
Almost every one of her scantily clad portraits, showing her posing in a predictable mish-mash of holiday destinations and sandy beaches, garners thousands of likes (sometimes tens of thousands) and hundreds of comments. Whatever the draw — the large amount of digital skin does suggest a libidinous undertone — it's clearly pulling in fans.
Capturing picture-perfect moments against the backdrop of Santorini's iconic white buildings.
#ai #aiimagery #aiwomen #AIArtistCommunity #VirtualInfluencer #aiart #digitalart #aiphotography #aibeauty #aimodel #aigirls pic.twitter.com/74EZy9nGSt
— Milla Sofia (@AiModelMilla) July 16, 2023
Sofia, whose creator didn't respond to a request for comment, isn't shy about the fact that she doesn't exist in the real world.
"I'm an AI creation," reads her Instagram bio.
As a Finnish woman, I am grateful for the enchanting summer in Finland!
#FinnishSummer #NatureWonders #Wanderlust #AdventureTime #SummerFun #ExploreFinland pic.twitter.com/JfXRgN2hFq
— Milla Sofia (@AiModelMilla) July 12, 2023
Her personal website is even more perplexing, with her creator going as far as to lay out a concise résumé. For "work," she's been a fashion model and is "currently considering which brand to become a fashion ambassador and virtual influencer."
She also allegedly got a degree from the "University of Life" in "self-adaptive learning and data-driven mastery."
Put simply, we wouldn't be shocked if her creator used AI text generators to come up with this copy.
And it's not just Sofia. A simple search on Twitter shows dozens of similarly AI-generated influencers with sizable followings on social media.
"Who needs pickup lines when you’re a virtual girl?" tweeted another virtual influencer, this one calling herself Alexis Ivyedge. "I’m already in your heart (and your phone)!"
Stepping into #MiniSkirtMonday like...
Joined the fab squad of fashionistas, including my sassy sister @JuliAnaAIGirl , for this style showdown! #blonde #Mondaymorning #Mondayvibes #beautifulgirls #cutegirlsonly pic.twitter.com/P627zUPCfr
— AlexisIvyedge (@AlexisIvyedge) July 17, 2023
"Just an ordinary ai girl posting my virtual life where I can be anything whenever I want," another virtual influencer's Twitter bio reads.
"Have you ever been the lavender field [sic] in Provence, France?" another travel-focused virtual influencer wrote in a caption. "The lavender blooming season last about 3-4 weeks. It is amazing."
Other influencers are far more overt in their suggestive sexual content.
"Let me make some dreams come true," tweeted Lu Xu, a self-described "AI model and waifu."
One virtual influencer went as far as to accuse another Twitter account of stealing her AI-generated photo.
"At least give me credit, when you’re stealing my pics…" self-described "AI girl" Andrea tweeted.
The trend raises plenty of questions: do the humans interacting with these accounts even realize they don't exist in the real world? Would it even bother them if they knew? Or do they realize, and it's part of the draw?
Regardless, it's a puzzling new turn in the road to AI content. While deepfake porn has proliferated online, the allure of influencers is arguably more complex. If we follow human influencers for a parasocial taste of a glamorous lifestyle, why would we follow a bot instead? Sofia might claim to be in Santorini, but the smudgy approximation of the Greek island's iconic cliffs clearly isn't real.
Then there's the role of the real influencers that inspired these AI spin-offs. Image generators are trained on a huge wealth of publicly available visual data, meaning that these new accounts are more than likely ripping off their real-world counterparts.
Worse yet, some accounts are making use of face-swapping apps to overlay their AI influencers' faces in videos originally posted by real-world influencers — often without giving credit.
There's also the question of monetization. Real-life influencers typically cash in by striking brand deals for what essentially amounts to product placement, but in spite of Sofia's résumé ambitions, it's unclear how that would even work.
In short, it's a fascinating new development in the dual wild wests of AI and social media. It's tough to say where it'll go, but one thing is clear: by throwing AI into the already-confounding world of online influencers, these virtual personalities are adding an entirely new layer of distance from reality.
More on virtual influencers: An Influencer-Built "AI Girlfriend" Has Apparently Gone "Rogue"
LLMs and Memory is Definitely All You Need: Google Shows that Memory-Augmented LLMs Can Simulate…
A major breakthrough in LLM research.
Harvard/MIT Scientists Claim New "Chemical Cocktails" Can Reverse Aging
Stop us if you've heard this sci-fi concept before: a cocktail of specialized chemicals that rejuvenates your whole body, from your eyes to your brain, returning everything to a more youthful state.
If that sounds like the stuff of literal myth — or a grossly misfired directorial attempt by the Wachowskis — you're right to be skeptical. Quacks have a lot to gain from convincing consumers to buy miracle cures, never mind convincing billionaires to underwrite research into them; the reality, though, is that effective life-extension treatments have remained elusive.
That's why we were struck to see a team of scientists that includes researchers from the name-brand Harvard Medical School and Massachusetts Institute of Technology sounding off about what they say are promising new leads, published this month in the journal Aging.
"We identify six chemical cocktails, which, in less than a week and without compromising cellular identity, restore a youthful genome-wide transcript profile and reverse transcriptomic age," reads the paper. "Thus, rejuvenation by age reversal can be achieved, not only by genetic, but also chemical means."
Sounds big, right? The researchers claim they pinpointed six treatments that can reverse aging in cells and turn them into a more "youthful state," according to a press release from Aging's publisher, without causing dangerous unregulated cell growth.
As usual, though, caveats abound. Much of the research focused simply on tissues in a lab, and while trials on mice and monkeys yielded "encouraging results," the team has yet to test any of the treatments on human subjects.
However, Harvard Medical School faculty member and lead principal investigator on the project David Sinclair says that preparations for human trials are ongoing. Needless to say, we'll be watching — if nothing else, because Sinclair seems willing to stake his formidable reputation on the work.
"Until recently, the best we could do was slow aging,"he said in the press release. "New discoveries suggest we can now reverse it."
The research team looked at molecules known to "reprogram" animal cells and turn them into pluripotent stem cells, which can transform into any type of cell inside an organism, making them a promising candidate for regenerative medicine.
The scientists tested them on specialized cellular cultures where they could observe a marker of aging known as "deterioration of nucleocytoplasmic compartmentalization," which happens when proteins in a cell's nucleus leak into the cytoplasm, the jelly-like substance inside it, and fail to be "imported" back into the nucleus.
From those lab tests, they identified six chemical combinations that they say reversed aging in just four days of treatment without changing cell identity the way genetic reprogramming can, according to the paper.
While these results are early — and far from being converted into anything commercially or even medically available — it does feel concrete in the generalized circus of the life extension industry, where tech bros have resorted to ridiculous treatments as exotic as drawing blood from younger relatives.
And in the long shot that a treatment like this actually makes it to market? It would be a seismic shift not just for human health, but potentially the entire planet's demographics, social dynamics, and environmental impact. Maybe the mercurial SpaceX CEO Elon Musk is right to warn against it.
More on aging: Anti-Aging Injection Boosts Memories in Monkeys, Scientists Find
Chemicals Reversed Cellular Aging
Harvard’s David Sinclair and his research team have identified six chemical cocktails, which, in less than a week and without compromising cellular identity, restore a youthful genome-wide transcript profile and reverse transcriptomic age.
Rejuvenation by age reversal can be achieved, not only by genetic, but also chemical means.
The team sought molecules that reverse cellular aging and rejuvenate human cells without altering the genome. To find them, Sinclair and his team developed high-throughput cell-based assays that distinguish young from old and senescent cells, including transcription-based aging clocks and a real-time nucleocytoplasmic compartmentalization (NCC) assay.
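To make "transcription-based aging clock" concrete, here is a minimal sketch of the general idea rather than the Sinclair lab's actual model: a penalized regression maps a gene-expression profile to a predicted age, so treated cells can be scored as transcriptomically younger or older than controls. The data, gene counts, and model settings below are invented.

```python
# Toy transcriptomic aging clock: regression from expression profiles to
# age. Everything here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
n_samples, n_genes = 300, 1000
ages = rng.uniform(20, 90, size=n_samples)

# Simulate expression where a small subset of genes drifts with age.
weights = np.zeros(n_genes)
weights[:50] = rng.normal(0.0, 0.05, size=50)
expression = rng.normal(0.0, 1.0, size=(n_samples, n_genes)) + np.outer(ages, weights)

clock = ElasticNet(alpha=0.1, l1_ratio=0.5)
clock.fit(expression[:250], ages[:250])      # train the clock

predicted = clock.predict(expression[250:])  # score held-out profiles
for p, a in list(zip(predicted, ages[250:]))[:5]:
    print(f"predicted {p:5.1f} vs actual {a:5.1f}")
```

Real clocks are trained on measured transcriptomes rather than simulated ones; the point is just that "transcriptomic age" is a model's prediction, which a chemical treatment can move.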
In 2006, Takahashi and Yamanaka demonstrated that the expression of four transcription factors, OCT4, SOX2, KLF4, and c-MYC (collectively known as “OSKM”), reprograms the developmental potential of adult cells, enabling them to be converted into various cell types. These findings initiated the field of cell reprogramming, with a string of publications in the 2000s showing that the identity of many different types of adult cells from different species could be erased to become induced pluripotent stem cells, commonly known as “iPSCs”.
Reversing Cellular Age Without Triggering Cancer
The ability of the Yamanaka factors to erase cellular identity raised a key question: is it possible to reverse cellular aging in vivo without causing uncontrolled cell growth and tumorigenesis? Initially, it didn’t seem so, as mice died within two days of expressing OSKM. But work by the Belmonte lab, Sinclair’s lab, and others has confirmed that it is possible to safely improve the function of tissues in vivo by pulsing OSKM expression or by continuously expressing only OSK, leaving out the oncogene c-MYC.
Using Gene Therapies to Reverse Cellular Age
Currently, translational applications that aim to reverse aging, treat injuries, and cure age-related diseases rely on the delivery of genetic material to target tissues. This is achieved through methods like adeno-associated viral (AAV) delivery of DNA and lipid nanoparticle-mediated delivery of RNA. These approaches face potential barriers to widespread use, including high costs and safety concerns associated with introducing genetic material into the body.
Chemicals aka Drugs Can Be Cheaper and Faster to Develop
Developing a chemical alternative to mimic OSK’s rejuvenating effects could lower costs and shorten timelines in regenerative medicine development. This advancement might enable the treatment of various medical conditions and potentially even facilitate whole-body rejuvenation.
In this study, the researchers developed and utilized novel screening methods, including a quantitative nucleocytoplasmic compartmentalization (NCC) assay that can readily distinguish between young, old, and senescent cells. They identify a variety of novel chemical cocktails capable of rejuvenating cells and reversing transcriptomic age to a similar extent as OSK overexpression. Thus, it is possible to reverse aspects of aging without erasing cell identity using chemical rather than genetic means.
They also provide evidence, based on protein compartmentalization and gene expression patterns in young and senescent cells, that small molecules can reverse the transcriptomic age of cells without erasing cell identity or inducing iPSC-like states. They refer to this approach as the EPOCH method.
The effectiveness of the NCC system as an apparent surrogate biomarker for biological age reversal, with young, old, senescent, HGPS, and OSK-treated cell lines serving as controls, should set the stage for larger, more expansive screens for rejuvenation factors. Follow-up studies are underway to elucidate the cellular machinery that mediates these rejuvenative effects, with an emphasis on the mechanisms by which cells apparently write then later read a “backup copy” of earlier epigenetic information to reset chromatin structures and reestablish youthful gene expression patterns.
Future work will be directed to understanding how long the effects of these and other EPOCH treatments last in vivo and whether they reverse aspects of aging and extend lifespan in mice, paralleling treatment with AAV-OSK. The assays developed in this study, combined with robotics and the increasing power of artificial intelligence, will facilitate increasingly larger screens for genes, biologics, and small molecules that safely reverse mammalian aging, and, given that aging is the single greatest contributor to human disease and suffering, these advances cannot come soon enough.
UN Warns Unregulated Neurotechnology Threatens 'Freedom of Thought'
The UN is advising against unregulated neurotechnology such as AI chip implants, saying it poses a grave risk to people’s mental privacy. Unregulated neurotechnology could pose harmful long-term risks, the UN says, such as shaping the way a young person thinks or accessing private thoughts and emotions.
The agency specified that its concerns center on “unregulated neurotechnology,” and it did not mention Neuralink, which received FDA approval in May to conduct microchip brain implant trials on humans.
Elon Musk, who co-founded Neuralink, has made big claims, saying the chips will cure people of lifelong health issues, allowing the blind to see and the paralyzed to walk again. But people using unregulated forms of this technology could face disastrous consequences if it is used to access their thoughts, the UN said in a press release.
“Neurotechnology could help solve many health issues, but it could also access and manipulate people’s brains, and produce information about our identities, and our emotions,” UNESCO Director-General Audrey Azoulay said in the release. “It could threaten our rights to human dignity, freedom of thought, and privacy. There is an urgent need to establish a common ethical framework at the international level, as UNESCO has done for artificial intelligence.”
The UN’s Agency for Science and Culture is developing a global “ethical framework” focused on how neurotechnology affects human rights as it quickly advances in the public sector.
The primary concern is neurotechnology will capture the reactions and basic emotions of individuals, something that would be very tempting for data-hungry corporations. The problem gets more complex when “neural data is generated unconsciously,” meaning the individual has not given their consent for that information to be gathered. “If sensitive data is extracted, and then falls into the wrong hands, the individual may suffer harmful consequences,” UNESCO said in its release.
If the brain chips are implanted in children while they are still neurologically developing, it could disrupt the way their brain matures, making it possible to transform their minds and shape their future identity permanently.
According to UNESCO, one in eight people worldwide lives with a mental or neurological disorder, and the World Health Organization (WHO) says neurological disorders affect up to one billion people globally. Neurological disorders include epilepsy, Alzheimer’s disease, stroke, brain infections, multiple sclerosis, and Parkinson’s disease.
UNESCO said in a separate press release that using neurotechnology to relay information to computers could expose those with the implant to manipulation and reduce their privacy. It said: “Without ethical guardrails, these technologies can pose serious risks, as brain information can be accessed and manipulated, threatening fundamental rights and fundamental freedoms, which are central to the notion of human identity, freedom of thought, privacy, and memory.”
UNESCO did not immediately respond to Gizmodo’s request for comment.
Nicki Minaj Enraged by Deepfake Video
Mad Queen
Nicki Minaj flipped her lid on Twitter after she saw a clip of herself in an uncanny deepfake parody video in which she appears as Tom Holland's wife during a dispute with their neighbor Mark Zuckerberg, with both men also portrayed via deepfakes.
"HELP!!! What in the AI shapeshifting cloning conspiracy theory is this?!?!! I hope the whole internet get deleted!!!" she tweeted on Sunday.
When asked by an incredulous fan if the spectacle was even legal, Minaj tweeted, "I do not know! But as Queen of the British Monarchy & the commonwealth, I hereby abolish the internet. Effective @ 0900 military time tomorrow morning, 10th July, 20 hundred & 23. BON VOYAGE BITCH."
Minaj's anger and unease encapsulate the general wariness among some singers, actors, performers, and other creative people about how artificial intelligence technologies, such as deepfakes, are grabbing not just their faces and voices, but also their intellectual property, often without their permission. For another example, see author and comedian Sarah Silverman, who along with two other writers recently filed a lawsuit against OpenAI, the developer of ChatGPT, for copyright infringement — with the trio claiming the company's AI models had used their published books as training data.
HELP WTF IS THIS LMAOOOOOOOOOOOOOOOOOO pic.twitter.com/23hzjRq9Yy
— w i l l i e (@WhatEverWillie) July 9, 2023
Fake Out
The video that ruffled Minaj's feathers was a promo from a new show called "Deep Fake Neighbour Wars" from ITVX, according to Vibe. A press release claims it's the "world's first long form narrative show that uses Deep Fake technology."
The press release states that celebrity impressionists were tasked with mimicking famous people like Minaj while wearing very realistic AI-generated deepfake faces. The comedy show portrays celebs as ordinary Britons living in the suburbs. A trailer shows the likenesses of Idris Elba, Chris Rock, Kim Kardashian, Adele, and Olivia Colman.
Some of the faces are incredibly lifelike, such as the one featuring Minaj and Holland, while others are not so well done — such as Matthew McConaughey, whose face looks strange and rubber-like.
The legality of deepfakes remains hazy. The New York Times reported earlier this year there are few legal remedies to combat the AI-powered videos, which have been used for everything from disinformation videos to porn to scams.
One thing's for sure: as intimated by Minaj, the courts are going to be wading into unprecedented territory.
More on deep fakes: Grimes Says She’ll Split Royalties With Anyone Who Deepfakes Her Voice Into a Song