#A.I. & Neural Networks news
jcmarchi · 1 month
Romanian AI Helps Farmers and Institutions Get Better Access to EU Funds - Technology Org
New Post has been published on https://thedigitalinsider.com/romanian-ai-helps-farmers-and-institutions-get-better-access-to-eu-funds-technology-org/
A Romanian state agency overseeing rural investments has adopted artificial intelligence to aid farmers in accessing European Union funds.
Gardening based on aquaculture technology. Image credit: sasint via Pixabay, free license
The Agency for Financing Rural Investments (AFIR) revealed that it integrated robots from software automation firm UiPath approximately two years ago. These robots have assumed the arduous task of accessing state databases to gather land registry and judicial records required by farmers, entrepreneurs, and state entities applying for EU funding.
George Chirita, director of AFIR, said the AI-driven automation has been groundbreaking in expediting the agency’s most important processes for farmers, enhancing their efficiency. Since the introduction of these robots, AFIR has managed financing requests totaling 5.32 billion euros ($5.75 billion) from over 50,000 beneficiaries, including farmers, businesses, and local institutions.
The implementation of robots has notably saved AFIR staff approximately 784 days’ worth of document searches. Over the past two decades, AFIR has disbursed funds amounting to 21 billion euros.
Despite Romania’s burgeoning status as a technology hub with a highly skilled workforce, the nation continues to lag behind its European counterparts in offering digital public services to citizens and businesses, and in effectively accessing EU development funds. Eurostat data from 2023 indicated that only 28% of Romanians possessed basic digital skills, significantly below the EU average of 54%. Moreover, Romania’s digital public services scored 45, well below the EU average of 84.
UiPath, the Romanian company valued at $13.3 billion following its public listing on the New York Stock Exchange, also provides automation solutions to agricultural agencies in other countries, including Norway and the United States.
Written by Vytautas Valinskas
jcmarchi · 4 months
The Way the Brain Learns is Different from the Way that Artificial Intelligence Systems Learn - Technology Org
New Post has been published on https://thedigitalinsider.com/the-way-the-brain-learns-is-different-from-the-way-that-artificial-intelligence-systems-learn-technology-org/
Researchers from the MRC Brain Network Dynamics Unit and Oxford University’s Department of Computer Science have set out a new principle to explain how the brain adjusts connections between neurons during learning.
This new insight may guide further research on learning in brain networks and may inspire faster and more robust learning algorithms in artificial intelligence.
Study shows that the way the brain learns is different from the way that artificial intelligence systems learn. Image credit: Pixabay
The essence of learning is to pinpoint which components in the information-processing pipeline are responsible for an error in output. In artificial intelligence, this is achieved by backpropagation: adjusting a model’s parameters to reduce the error in the output. Many researchers believe that the brain employs a similar learning principle.
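As a concrete illustration of the backpropagation idea described above — a minimal sketch with made-up numbers, not code from the study — a single adjustable parameter can be nudged repeatedly in the direction that reduces the output error:

```python
# Minimal sketch of backpropagation-style learning: a one-parameter
# "network" learns y = 2x. All numbers here are illustrative.
w = 0.0                      # the single adjustable parameter
lr = 0.1                     # learning rate
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for _ in range(200):         # repeated presentations, as the article notes
    for x, target in data:
        y = w * x                    # forward pass
        error = y - target           # output error
        grad = error * x             # gradient of (error**2)/2 w.r.t. w
        w -= lr * grad               # adjust the parameter to reduce error

print(round(w, 3))           # → 2.0
```

Note that the parameter only converges after many repeated presentations of the same data — exactly the inefficiency, relative to biological learning, that the following paragraphs describe.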
However, the biological brain remains superior to current machine learning systems. For example, we can learn new information after seeing it just once, while artificial systems must be trained on the same information hundreds of times before they learn it.
Furthermore, we can learn new information while maintaining the knowledge we already have, while learning new information in artificial neural networks often interferes with existing knowledge and degrades it rapidly.
These observations motivated the researchers to identify the fundamental principle employed by the brain during learning. They looked at some existing sets of mathematical equations describing changes in the behaviour of neurons and in the synaptic connections between them.
They analysed and simulated these information-processing models and found that they employ a fundamentally different learning principle from that used by artificial neural networks.
In artificial neural networks, an external algorithm tries to modify synaptic connections in order to reduce error, whereas the researchers propose that the human brain first settles the activity of neurons into an optimal balanced configuration before adjusting synaptic connections.
The researchers posit that this is in fact an efficient feature of the way that human brains learn. This is because it reduces interference by preserving existing knowledge, which in turn speeds up learning.
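The contrast in ordering can be caricatured in toy form. The snippet below is our own illustration of the "settle activity first, then adjust weights" idea, not the authors' model or code; the tiny two-weight chain, the energy function, and all numbers are assumptions for illustration only:

```python
# Toy illustration (ours, not the authors' code) of settling activity
# before adjusting weights, in a chain x -> h -> y with target y_star.
x, y_star = 1.0, 1.0
w1, w2 = 0.5, 0.5
lr_h, lr_w = 0.1, 0.2

# 1) Settle the hidden activity h into a balanced configuration that
#    reconciles the input drive with the target, by relaxing the energy
#    E = (h - w1*x)**2 + (y_star - w2*h)**2.
h = w1 * x                                   # start from the feedforward value
for _ in range(100):
    dE_dh = 2 * (h - w1 * x) - 2 * w2 * (y_star - w2 * h)
    h -= lr_h * dE_dh

# 2) Only now adjust the weights, each toward the settled configuration.
w1 += lr_w * (h - w1 * x) * x                # pull w1*x toward settled h
w2 += lr_w * (y_star - w2 * h) * h           # pull w2*h toward the target
print(round(w1, 3), round(w2, 3))
```

Because each weight moves toward a locally consistent settled state rather than being driven by a globally backpropagated error, updates to one connection disturb the others less — the interference-reduction property described above, in miniature.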
Writing in Nature Neuroscience, the researchers describe this new learning principle, which they have termed ‘prospective configuration’. They demonstrated in computer simulations that models employing this prospective configuration can learn faster and more effectively than artificial neural networks in tasks that are typically faced by animals and humans in nature.
The authors use the real-life example of a bear fishing for salmon. The bear can see the river and it has learnt that if it can also hear the river and smell the salmon it is likely to catch one. But one day, the bear arrives at the river with a damaged ear, so it can’t hear it.
In an artificial neural network information processing model, this lack of hearing would also result in a lack of smell (because while learning there is no sound, backpropagation would change multiple connections including those between neurons encoding the river and the salmon) and the bear would conclude that there is no salmon, and go hungry.
But in the animal brain, the lack of sound does not interfere with the knowledge that there is still the smell of the salmon, therefore the salmon is still likely to be there for catching.
The researchers developed a mathematical theory showing that letting neurons settle into a prospective configuration reduces interference between information during learning. They demonstrated that prospective configuration explains neural activity and behaviour in multiple learning experiments better than artificial neural networks.
Lead researcher Professor Rafal Bogacz of MRC Brain Network Dynamics Unit and Oxford’s Nuffield Department of Clinical Neurosciences says: ‘There is currently a big gap between abstract models performing prospective configuration, and our detailed knowledge of anatomy of brain networks. Future research by our group aims to bridge the gap between abstract models and real brains, and understand how the algorithm of prospective configuration is implemented in anatomically identified cortical networks.’
The first author of the study Dr Yuhang Song adds: ‘In the case of machine learning, the simulation of prospective configuration on existing computers is slow, because they operate in fundamentally different ways from the biological brain. A new type of computer or dedicated brain-inspired hardware needs to be developed, that will be able to implement prospective configuration rapidly and with little energy use.’
Source: University of Oxford
jcmarchi · 5 months
Open-Source Platform Cuts Costs for Running AI - Technology Org
New Post has been published on https://thedigitalinsider.com/open-source-platform-cuts-costs-for-running-ai-technology-org/
Cornell researchers have released a new, open-source platform called Cascade that can run artificial intelligence (AI) models in a way that slashes expenses and energy costs while dramatically improving performance.
Artificial intelligence hardware – artistic interpretation. Image credit: Alius Noreika, created with AI Image Creator
Cascade is designed for settings like smart traffic intersections, medical diagnostics, equipment servicing using augmented reality, digital agriculture, smart power grids and automatic product inspection during manufacturing – situations where AI models must react within a fraction of a second. It is already in use by College of Veterinary Medicine researchers monitoring cows for risk of mastitis.
With the rise of AI, many companies are eager to leverage new capabilities but worried about the associated computing costs and the risks of sharing private data with AI companies or sending sensitive information into the cloud – far-off servers accessed through the internet.
Also, today’s AI models are slow, limiting their use in settings where data must be transferred back and forth or the model is controlling an automated system. 
A team led by Ken Birman, professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, combined several innovations to address these concerns.
Birman partnered with Weijia Song, a senior research associate, to develop an edge computing system they named Cascade. Edge computing is an approach that places the computation and data storage closer to the sources of data, protecting sensitive information. Song’s “zero copy” edge computing design minimizes data movement.
The AI models don’t have to wait to fetch data when reacting to an event, which enables faster responses, the researchers said.
“Cascade enables users to put machine learning and data fusion really close to the edge of the internet, so artificially intelligent actions can occur instantly,” Birman said. “This contrasts with standard cloud computing approaches, where the frequent movement of data from machine to machine forces those same AIs to wait, resulting in long delays perceptible to the user.” 
Cascade is giving impressive results, with most programs running two to 10 times faster than cloud-based applications, and some computer vision tasks speeding up by factors of 20 or more. Larger AI models see the most benefit.
Moreover, the approach is easy to use: “Cascade often requires no changes at all to the AI software,” Birman said.
Alicia Yang, a doctoral student in the field of computer science, was one of several student researchers in the effort. She developed Navigator, a memory manager and task scheduler for AI workflows that further boosts performance.
“Navigator really pays off when a number of applications need to share expensive hardware,” Yang said. “Compared to cloud-based approaches, Navigator accomplishes the same work in less time and uses the hardware far more efficiently.”
In CVM, Parminder Basran, associate research professor of medical oncology in the Department of Clinical Sciences, and Matthias Wieland, Ph.D. ’21, assistant professor in the Department of Population Medicine and Diagnostic Sciences, are using Cascade to monitor dairy cows for signs of increased mastitis – a common infection in the mammary gland that reduces milk production.
By imaging the udders of thousands of cows during each milking session and comparing the new photos to those from past milkings, an AI model running on Cascade identifies dry skin, open lesions, rough teat ends and other changes that may signal disease. If early symptoms are detected, cows could be subjected to a medicinal rinse at the milking station to potentially head off a full-blown infection.
Thiago Garrett, a visiting researcher from the University of Oslo, used Cascade to build a prototype “smart traffic intersection.”
His solution tracks crowded settings packed with people, cars, bicycles and other objects, anticipates possible collisions and warns of risks – within milliseconds after images are captured. When he ran the same AI model on a cloud computing infrastructure, it took seconds to sense possible accidents, far too late to sound a warning.
With the new open-source release, Birman’s group hopes other researchers will explore possible uses for Cascade, making AI applications more widely accessible.
“Our goal is to see it used,” Birman said. “Our Cornell effort is supported by the government and many companies. This open-source release will allow the public to benefit from what we created.”
Source: Cornell University
jcmarchi · 5 months
AI Algorithm Improves Predictive Models of Complex Dynamical Systems - Technology Org
New Post has been published on https://thedigitalinsider.com/ai-algorithm-improves-predictive-models-of-complex-dynamical-systems-technology-org/
Researchers at the University of Toronto have made a significant step towards enabling reliable predictions of complex dynamical systems when there are many uncertainties in the available data or missing information.
Artificial intelligence – artistic concept. Image credit: geralt via Pixabay, free license
In a recent paper published in Nature, Prasanth B. Nair, a professor at the U of T Institute of Aerospace Studies (UTIAS) in the Faculty of Applied Science & Engineering, and UTIAS PhD candidate Kevin Course introduced a new machine learning algorithm that surmounts the real-world challenge of imperfect knowledge about system dynamics.
The computer-based mathematical modelling approach is used for problem solving and better decision making in complex systems, where many components interact with each other.  
The researchers say the work could have numerous applications ranging from predicting the performance of aircraft engines to forecasting changes in global climate or the spread of viruses.  
From left to right: Professor Prasanth Nair and PhD student Kevin Course are the authors of a new paper in Nature that introduces a new machine learning algorithm that addresses the challenge of imperfect knowledge about system dynamics. Image credit: University of Toronto
“For the first time, we are able to apply state estimation to problems where we don’t know the governing equations, or the governing equations have a lot of missing terms,” says Course, who is the paper’s first author.   
“In contrast to standard techniques, which usually require a state estimate to infer the governing equations and vice-versa, our method learns the missing terms in the mathematical model and a state estimate simultaneously.”  
State estimation, also known as data assimilation, refers to the process of combining observational data with computer models to estimate the current state of a system. Traditionally, it requires strong assumptions about the type of uncertainties that exist in a mathematical model.   
“For example, let’s say you have constructed a computer model that predicts the weather and at the same time, you have access to real-time data from weather stations providing actual temperature readings,” says Nair. “Due to the model’s inherent limitations and simplifications – which is often unavoidable when dealing with complex real-world systems – the model predictions may not match the actual observed temperature you are seeing.  
“State estimation combines the model’s prediction with the actual observations to provide a corrected or better-calibrated estimate of the current temperature. It effectively assimilates the data into the model to correct its state.”  
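Nair's weather example can be made concrete with the simplest possible version of this idea — a scalar, purely illustrative sketch; the paper's framework handles far more general settings: blend the model's forecast with the sensor reading, trusting whichever carries less uncertainty more.

```python
# Minimal scalar state estimate (illustrative only): combine a model
# forecast with a noisy observation, weighting each by its variance.
model_temp, model_var = 21.0, 4.0      # model predicts 21 °C, variance 4
sensor_temp, sensor_var = 23.0, 1.0    # station reads 23 °C, variance 1

# Precision-weighted blend: the lower-variance source gets more weight.
gain = model_var / (model_var + sensor_var)
estimate = model_temp + gain * (sensor_temp - model_temp)
est_var = (1 - gain) * model_var       # the blend is more certain than either input

print(round(estimate, 2), round(est_var, 2))   # → 22.6 0.8
```

The corrected estimate lands between the two sources, closer to the more trustworthy sensor — the "assimilation" step Nair describes. The hard problem the paper addresses is doing this when the model equations themselves are partially or wholly unknown.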
However, it has been previously difficult to estimate the underlying state of complex dynamical systems in situations where the governing equations are completely or partially unknown. The new algorithm provides a rigorous statistical framework to address this long-standing problem.  
“This problem is akin to deciphering the ‘laws’ that a system obeys without having explicit knowledge about them,” says Nair, whose research group is developing algorithms for mathematical modelling of systems and phenomena that are encountered in various areas of engineering and science.  
A byproduct of Course and Nair’s algorithm is that it also helps to characterize missing terms or even the entirety of the governing equations, which determine how the values of unknown variables change when one or more of the known variables change.   
The main innovation underpinning the work is a reparametrization trick for stochastic variational inference with Markov Gaussian processes that enables an approximate Bayesian approach to solve such problems. This new development allows researchers to deduce the equations that govern the dynamics of complex systems and arrive at a state estimate using indirect and “noisy” measurements.  
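The paper's reparametrization trick is specialized to Markov Gaussian processes, but it builds on a standard idea in stochastic variational inference, sketched generically below (this is not the authors' algorithm): rewrite a random sample as a deterministic function of the parameters plus parameter-free noise, so that gradient-based optimization can pass through the sampling step.

```python
import random

# Generic reparameterization sketch: a draw from N(mu, sigma**2) is
# rewritten as mu + sigma * eps, with eps ~ N(0, 1) independent of the
# parameters, making the sample differentiable in mu and sigma.
def sample_direct(mu, sigma, rng):
    return rng.gauss(mu, sigma)          # gradients cannot flow through this

def sample_reparam(mu, sigma, rng):
    eps = rng.gauss(0.0, 1.0)            # noise independent of parameters
    return mu + sigma * eps              # deterministic in mu and sigma

rng = random.Random(0)
draws = [sample_reparam(2.0, 0.5, rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)
print(round(mean, 2))                    # sample mean ≈ 2.0
```

Both functions produce the same distribution, but only the reparameterized form exposes mu and sigma to differentiation — which is what makes the stochastic approximations Course mentions below cheap to compute and easy to parallelize.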
“Our approach is computationally attractive since it leverages stochastic – that is randomly determined – approximations that can be efficiently computed in parallel and, in addition, it does not rely on computationally expensive forward solvers in training,” says Course.   
While Course and Nair approached their research from a theoretical viewpoint, they were able to demonstrate practical impact by applying their algorithm to problems ranging from modelling fluid flow to predicting the motion of black holes.   
“Our work is relevant to several branches of sciences, engineering and finance as researchers from these fields often interact with systems where first-principles models are difficult to construct or existing models are insufficient to explain system behaviour,” says Nair.  
“We believe this work will open the door for practitioners in these fields to better intuit the systems they study,” adds Course. “Even in situations where high-fidelity mathematical models are available, this work can be used for probabilistic model calibration and to discover missing physics in existing models.   
“We have also been able to successfully use our approach to efficiently train neural stochastic differential equations, which is a type of machine learning model that has shown promising performance for time-series datasets.”    
While the paper primarily addresses challenges in state estimation and governing equation discovery, the researchers say it provides a general groundwork for robust data-driven techniques in computational science and engineering.  
“As an example, our research group is currently using this framework to construct probabilistic reduced-order models of complex systems. We hope to expedite decision-making processes integral to the optimal design, operation and control of real-world systems,” says Nair.   
“Additionally, we are also studying how the inference methods stemming from our research may offer deeper statistical insights into stochastic differential equation-based generative models that are now widely used in many artificial intelligence applications.” 
Source: University of Toronto
jcmarchi · 5 months
Fruit flies could hold the key to building resiliency in autonomous robots - Technology Org
New Post has been published on https://thedigitalinsider.com/fruit-flies-could-hold-the-key-to-building-resiliency-in-autonomous-robots-technology-org/
Mechanical Engineering Assistant Professor Floris van Breugel has been awarded a $2 million National Science Foundation (NSF) grant to adapt autonomous robots to be as resilient as fruit flies.
Resiliency in autonomous robotic systems is crucial, especially for robotics systems used in disaster response and surveillance, such as drones monitoring wildfires. Unfortunately, modern robots have difficulty responding to new environments or damage to their bodies that might occur during disaster response, van Breugel wrote in his grant application. In contrast, living systems are remarkably adept at quickly adjusting their behavior to new situations thanks to redundancy and flexibility within their sensory and muscle control systems.
Scientific discoveries in fruit flies have helped shed light on how these insects achieve resiliency in flight, according to van Breugel. His project will translate that emerging knowledge on insect neuroscience to develop more resilient robotic systems.
“This is a highly competitive award on a topic with tremendous potential impact, which also speaks of the research excellence of the investigator and Mechanical Engineering at UNR,” Petros Voulgaris, Mechanical Engineering department chair, said.
This research aligns with the College of Engineering’s Unmanned Vehicles research pillar.
Engineering + flies
The intersection of engineering and flies has long interested van Breugel.
“As an undergrad, I did research where my main project was designing a flying, hovering thing that birds or insects vaguely inspired,” he said. “Throughout that project, I realized that the hard part, which was more interesting to me, is once you have this mechanical thing that can fly, how do you control it? How do you make it go where you want it to go? If it gets broken, how do you adapt to that?”
Van Breugel says he is examining how “animals can repurpose or reprogram their sensorimotor systems ‘on the fly’ to compensate for internal damage or external perturbations quickly.”
Working with van Breugel on the grant are experts in insect neuroscience, including Michael Dickinson, professor of bioengineering and aeronautics at the California Institute of Technology (and van Breugel’s Ph.D. advisor), and Yvette Fisher, assistant professor of neurobiology at UC Berkeley. Both have pioneered the fly brain-imaging discoveries and technology that van Breugel is drawing on in this research project. Also on the project is Bing Brunton, associate professor of biology at the University of Washington, who brings expertise in computational neuroscience.
The importance of flies to both engineering and neuroscience stems from the combination of their sophisticated behavior and brains that are numerically simple enough to be studied in detail. This “Goldilocks” combination, van Breugel said, makes it feasible to distill properties of their neural processing into fundamental engineering principles that can be applied to robotics systems.
As part of the grant, research experiences will be offered to middle school, high school and undergraduate students to participate in both neuroscience and robotics research. Van Breugel and his team also will develop open-source content to help bring neuroscience fluency to engineering students. This aligns with the College of Engineering’s Student Engagement operational pillar.
Source: University of Nevada, Reno
jcmarchi · 6 months
New AI noise-canceling headphone technology lets wearers pick which sounds they hear - Technology Org
New Post has been published on https://thedigitalinsider.com/new-ai-noise-canceling-headphone-technology-lets-wearers-pick-which-sounds-they-hear-technology-org/
Most anyone who’s used noise-canceling headphones knows that hearing the right noise at the right time can be vital. Someone might want to erase car horns when working indoors but not when walking along busy streets. Yet people can’t choose what sounds their headphones cancel.
A team led by researchers at the University of Washington has developed deep-learning algorithms that let users pick which sounds filter through their headphones in real time. Pictured is co-author Malek Itani demonstrating the system. Image credit: University of Washington
Now, a team led by researchers at the University of Washington has developed deep-learning algorithms that let users pick which sounds filter through their headphones in real time. The team is calling the system “semantic hearing.” Headphones stream captured audio to a connected smartphone, which cancels all environmental sounds. Through voice commands or a smartphone app, headphone wearers can select which sounds they want to include from 20 classes, such as sirens, baby cries, speech, vacuum cleaners and bird chirps. Only the selected sounds will be played through the headphones.
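The selection step itself can be caricatured in a few lines. The sketch below is our toy illustration, not UW's system — the class names, frame format, and classifier output are all invented; the real pipeline runs a neural network on raw audio under hard latency constraints:

```python
# Toy sketch of semantic selection: given audio frames already tagged by
# a (hypothetical) sound classifier, pass through only the classes the
# wearer opted in to and silence everything else.
SELECTED = {"siren", "bird_chirp"}        # the wearer's chosen classes

# (class_label, samples) pairs a classifier might emit for successive frames
frames = [
    ("car_horn",   [0.4, -0.2]),
    ("siren",      [0.9,  0.7]),
    ("speech",     [0.1,  0.3]),
    ("bird_chirp", [0.2, -0.1]),
]

output = []
for label, samples in frames:
    if label in SELECTED:
        output.extend(samples)               # selected sound passes through
    else:
        output.extend([0.0] * len(samples))  # everything else is canceled

print(output)   # → [0.0, 0.0, 0.9, 0.7, 0.0, 0.0, 0.2, -0.1]
```

The hard part, as the researchers explain below, is not this masking but separating overlapping sounds accurately within the roughly hundredth-of-a-second budget that keeps audio in sync with vision.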
The team presented its findings at UIST ’23 in San Francisco. In the future, the researchers plan to release a commercial version of the system.
“Understanding what a bird sounds like and extracting it from all other sounds in an environment requires real-time intelligence that today’s noise canceling headphones haven’t achieved,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “The challenge is that the sounds headphone wearers hear need to sync with their visual senses. You can’t be hearing someone’s voice two seconds after they talk to you. This means the neural algorithms must process sounds in under a hundredth of a second.”
Because of this time crunch, the semantic hearing system must process sounds on a device such as a connected smartphone, instead of on more robust cloud servers. Additionally, because sounds from different directions arrive in people’s ears at different times, the system must preserve these delays and other spatial cues so people can still meaningfully perceive sounds in their environment.
Tested in environments such as offices, streets and parks, the system was able to extract sirens, bird chirps, alarms and other target sounds, while removing all other real-world noise. When 22 participants rated the system’s audio output for the target sound, they said that on average the quality improved compared to the original recording.
In some cases, the system struggled to distinguish between sounds that share many properties, such as vocal music and human speech. The researchers note that training the models on more real-world data might improve these outcomes.
Source: University of Washington
jcmarchi · 6 months
7 Applications of AI Video in the Entertainment Industry - Technology Org
New Post has been published on https://thedigitalinsider.com/7-applications-of-ai-video-in-the-entertainment-industry-technology-org/
Artificial Intelligence (AI) has revolutionized various industries, and the entertainment sector is no exception. In recent years, AI has found numerous applications in video production, enhancing creativity, efficiency, and audience engagement. From video editing to content recommendation, AI-driven technologies are transforming the way we create, distribute, and consume entertainment content. In this article, we will explore seven exciting applications of generative AI video in the entertainment industry.
Video editing. Image credit: DaleshTV via Wikimedia, CC-BY-SA-4.0
Video Editing and Post-Production
One of the most prominent applications of AI in the entertainment industry is in video editing and post-production. AI-powered editing tools can analyze video footage to automatically select the best shots, correct color and lighting, and even add special effects. For instance, Adobe’s Premiere Pro offers AI-powered features like Auto Reframe, which automatically adjusts the aspect ratio of videos for different platforms, saving creators valuable time.
Deepfake Technology
Deepfake technology, which uses AI algorithms to superimpose one person’s face onto another’s body, has garnered significant attention in the entertainment industry. While controversial, deepfakes have been used in movies and TV shows to recreate the likeness of actors who are no longer available or to de-age actors. This technology allows filmmakers to create realistic digital people and characters, opening up new creative possibilities.
Personalized Content Recommendation
AI-driven recommendation algorithms have become a staple in the entertainment industry, helping platforms like Netflix, Spotify, and YouTube suggest content tailored to individual preferences. These algorithms analyze users’ viewing habits, likes, and dislikes to recommend movies, TV shows, and music that align with their tastes. This personalized content recommendation not only enhances user experience but also keeps viewers engaged for longer periods, increasing platform revenue.
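At its simplest, such a recommender scores unseen titles against a taste profile built from viewing history. The sketch below is purely illustrative — the titles, tags, and scoring rule are invented, and production systems are vastly more sophisticated:

```python
# Minimal flavor of content-based recommendation: score each catalog
# item by how well its tags match the user's viewing-history profile.
user_likes = {"sci-fi": 3, "drama": 1}          # tag counts from watch history

catalog = {
    "Space Saga": {"sci-fi", "action"},
    "Court Room": {"drama"},
    "Cook-Off":   {"reality"},
}

def score(item_tags):
    # Sum the user's affinity for each tag the item carries.
    return sum(user_likes.get(tag, 0) for tag in item_tags)

ranked = sorted(catalog, key=lambda title: score(catalog[title]), reverse=True)
print(ranked[0])    # → Space Saga
```

Real platforms combine signals like this with collaborative filtering over millions of users, but the core loop — profile, score, rank — is the same.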
Virtual Production
AI video is also revolutionizing virtual production techniques. Filmmakers can now use AI-driven tools to create realistic virtual sets and environments. This technology allows for more cost-effective and efficient filmmaking, as it eliminates the need for physical sets and on-location shoots. Additionally, it enables real-time visualization of scenes, empowering directors and actors to make instant creative decisions.
Video Restoration and Enhancement
AI algorithms are invaluable in restoring and enhancing old or damaged video footage. Whether it’s restoring classic films or enhancing historical footage, AI-driven tools can significantly improve the quality of visuals and audio. This not only preserves valuable cultural artifacts but also offers a new way to experience old content with modern clarity and vibrancy.
Content Creation and Generation
AI-generated content is gaining traction in the entertainment industry. AI systems can analyze trends, generate scripts, and even compose music. OpenAI’s GPT-3, for example, has been used to write articles, stories, and dialogues for video games. While AI-generated content is not replacing human creativity, it can be a valuable tool for generating ideas and streamlining content creation processes.
Audience Engagement and Interaction
AI is enhancing audience engagement in various entertainment forms, including interactive videos, virtual reality experiences, and augmented reality games. Interactive storytelling powered by AI allows viewers to make choices that impact the narrative, creating personalized experiences. Virtual reality and augmented reality experiences are also made more immersive and interactive with AI, providing audiences with captivating and engaging entertainment options.
AI video technology is reshaping the entertainment industry in remarkable ways. From video editing and deepfake technology to personalized content recommendation and virtual production, AI is enhancing creativity and efficiency throughout the production process. Additionally, AI is contributing to the preservation and enhancement of historical content and generating new forms of creative expression. As AI continues to evolve, we can expect even more exciting applications in the entertainment industry, pushing the boundaries of what’s possible in the world of video and media. It’s an exciting time for both creators and audiences as AI-driven innovations continue to transform the entertainment landscape.
jcmarchi · 15 days
Self-driving taxis are coming to New York City, but will need safety drivers too - Technology Org
New Post has been published on https://thedigitalinsider.com/self-driving-taxis-are-coming-to-new-york-city-but-will-need-safety-drivers-too-technology-org/
New York City has unveiled a new plan for permitting companies to trial autonomous taxi vehicles on its streets, mandating the presence of a human safety driver at all times.
Times Square, New York, USA. Image credit: Vidar Nordli-Mathisen via Unsplash, free license
In a bid to proactively address concerns surrounding fully autonomous vehicles, the city has established what it terms a “stringent permitting program.” This initiative aims to ensure that applicants are sufficiently prepared to test their technology in the complex urban landscape of New York City, prioritizing safety and proficiency.
Mayor Eric Adams emphasized the inevitability of autonomous technology’s integration into the city’s transportation system, stressing the need to implement it responsibly.
The criteria for obtaining permits would require prior experience in autonomous vehicle testing, with companies mandated to supply data from previous trials, including incident reports and the frequency of safety driver interventions, commonly referred to as “disengagements.”
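Disengagement data of the kind the permits require is usually summarized as a simple per-mile rate. A minimal sketch in Python (the figures and record layout below are invented for illustration, not the city’s actual reporting schema):

```python
def disengagement_rate(disengagements: int, miles_driven: float) -> float:
    """Safety-driver interventions per 1,000 miles, a common AV trial metric."""
    if miles_driven <= 0:
        raise ValueError("miles_driven must be positive")
    return (disengagements * 1000) / miles_driven

# Hypothetical trial log: (disengagements, miles driven) per reporting period.
trials = [(3, 15_000), (1, 8_000), (5, 12_000)]
total_events = sum(d for d, _ in trials)
total_miles = sum(m for _, m in trials)
overall = disengagement_rate(total_events, total_miles)  # rate across all periods
```

A regulator comparing applicants would read this rate alongside the conditions (weather, road type, traffic density) under which each disengagement occurred.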
A notable stipulation of the new regulations is the exclusion of fully driverless vehicles from testing on public roads within the city limits. Instead, only vehicles equipped with safety drivers will be eligible for testing permits.
Only a few select companies, such as Waymo and Cruise, have introduced driverless vehicles, categorized as Level 4 automation, into the market. However, challenges concerning traffic congestion and safety have hindered their widespread adoption.
In a notable incident last October, a driverless Cruise vehicle in San Francisco dragged a pedestrian for over 20 feet along the street, prompting authorities to suspend the company’s operational permit. Similarly, a few months later, a driverless Waymo vehicle was involved in a minor collision with a bicyclist. Officials in San Francisco criticized both companies for impeding traffic flow and obstructing emergency vehicles and buses.
To preempt such issues, New York City has proposed a mandate requiring companies to maintain safety drivers in their vehicles at all times. Under Mayor Adams’ proposal, companies would still need to secure a permit from the state Department of Motor Vehicles. Moreover, applicants would be obligated to furnish details on the recruitment and training procedures of their safety drivers and commit to adhering to the latest best practices outlined by the Society of Automotive Engineers.
According to a spokesperson, data derived from autonomous vehicle (AV) testing will eventually be accessible through the city’s Open Data portal. As part of the application procedure, the Department of Transportation will assess requests from applicants regarding the confidentiality of specific data that may be withheld from disclosure.
Written by Vytautas Valinskas
0 notes
jcmarchi · 15 days
Text
Amazon now offers a phone-based palm scanning service for sign-up purposes - Technology Org
New Post has been published on https://thedigitalinsider.com/amazon-now-offers-a-phone-based-palm-scanning-service-for-sign-up-purposes-technology-org/
Amazon now offers a phone-based palm scanning service for sign-up purposes - Technology Org
Amazon’s palm scanning service now offers the convenience of sign-up directly from your mobile device.
Palm features used in personal identification. Image credit: Amazon
Instead of requiring a visit to a physical location, users can now enroll in Amazon One by capturing images of their palm using the newly launched Amazon One app, available on both iOS and Android platforms. This streamlined process enables users to set up their accounts swiftly, facilitating the use of palm scanning for authentication purposes at supported locations.
Previously, Amazon One enrollment required visiting designated physical sites, where users could link their palm print to their Amazon account for various purposes such as making purchases or age verification.
Presently, this service is accessible at all Whole Foods stores across the US, select Panera Bread locations, and over 150 other venues, including stadiums, airports, fitness centers, and convenience stores.
Amazon One utilizes advanced generative AI technology to analyze the unique vein structure of the palm, generating a distinct numerical vector representation for identification during in-store palm scans. It’s noteworthy that Amazon does not utilize raw palm images for identification purposes.
On the mobile app, Amazon employs AI algorithms to compare the photo captured by the phone’s camera with the near-infrared imagery obtained from an Amazon One device. Users are required to integrate a payment method within the app and upload a photo of their identification for age verification purposes if desired. Additionally, the app allows for the linking of loyalty programs, season passes, and gym memberships.
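The matching step described above, comparing a vector derived from the phone photo against one from an in-store scan, amounts to a similarity check between embeddings. A toy sketch (the vectors, their dimension, and the threshold are invented; Amazon’s actual palm-signature format is not public):

```python
import math

def cosine_similarity(a, b):
    # Similarity between two identity vectors; 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(enrolled, probe, threshold=0.95):
    # Accept the scan only if similarity clears a strict threshold.
    return cosine_similarity(enrolled, probe) >= threshold

enrolled = [0.12, 0.87, 0.33, 0.51]      # signature stored at enrollment
probe_same = [0.13, 0.85, 0.34, 0.50]    # same palm, slight capture noise
probe_other = [0.90, 0.10, 0.70, 0.05]   # a different palm
```

Real biometric systems tune the threshold to trade false accepts against false rejects, and run liveness and anti-spoofing checks before any comparison happens.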
While privacy concerns surrounding the technology persist, Amazon asserts that palm and vein images are promptly encrypted and transmitted to a highly secure section within the AWS Cloud, specifically designated for Amazon One. It is in this secure environment that Amazon creates the unique palm signature.
Furthermore, Amazon emphasizes that the new app incorporates additional layers of anti-spoofing measures, and it explicitly prohibits the saving or downloading of palm images to the user’s device. Nonetheless, some individuals may remain apprehensive about relinquishing their biometric data, considering the irreplaceable nature of palm prints compared to traditional passwords.
Written by Alius Noreika
0 notes
jcmarchi · 15 days
Text
AI and Data Privacy: How Lerna AI Offers Best of Both Worlds - Technology Org
New Post has been published on https://thedigitalinsider.com/ai-and-data-privacy-how-lerna-ai-offers-best-of-both-worlds-technology-org/
AI and Data Privacy: How Lerna AI Offers Best of Both Worlds - Technology Org
The breakneck speed with which artificial intelligence (AI) is transforming our world has been nothing short of exciting. A report by KPMG Global Tech highlights AI and machine learning as the most crucial technologies for achieving short-term goals for tech leaders today.
Cybersecurity, data privacy – artistic interpretation. Image credit: Artem Bryzgalov via Unsplash, free license
However, similar to any emerging technology, inherent risks exist. One survey, encompassing over 17,000 individuals, found 61% of respondents expressed apprehension about trusting AI systems, with only half convinced that the benefits outweigh the potential risks. A major concern revolves around personal data privacy.
AI systems are data-hungry beasts. To learn and function effectively, they require massive datasets to train their algorithms and fuel performance. This data often includes personal information like names, addresses, financial details, and even sensitive data like medical records and social security numbers. The collection, processing, and storage of such data raise significant questions regarding its usage and vulnerability.
The Road to Data Privacy
The widespread and unregulated use of AI poses a significant threat to human rights and personal privacy. For example, generative AI (GenAI) uses powerful foundation models trained on massive volumes of unlabeled data— which may or may not take personal data privacy into account.
That’s one of the reasons why AI leaders have issued open letters advocating for a temporary pause in GenAI development, urging policymakers to establish “guardrails” to ensure responsible use in the future. In response, governments have stepped up efforts to ensure that AI is not used indiscriminately. For example, the EU’s GDPR (in force since 2018) and the AI Act are two separate pieces of legislation that regulate, respectively, how personal data is gathered and how AI is used within the European Union.
The reach of AI governance extends beyond developers and AI pioneers. Companies that integrate AI products and services into their operations hold significant responsibilities. These companies must prioritize ethical considerations when selecting and using AI tools.  For instance, ensuring AI doesn’t perpetuate biases present in training data is critical.  Additionally, companies must comply with relevant regulations, such as the EU AI Act if they operate within the European market.
As we navigate the complex interplay between AI innovation and personal data privacy, it is incumbent upon all stakeholders (business leaders, policymakers, technologists, and consumers) to engage in a constructive dialogue aimed at forging a path forward that maximizes the benefits of AI while mitigating its potential risks. It is right at this intersection that companies like Lerna AI shine brightest.
Data Privacy the Lerna Way
In a post-cookies world where third-party data is dwindling, the imperative for businesses to leverage their first-party data potential has never been more critical. Lerna AI, a game-changer in mobile hyper-personalization recommender systems, offers a compelling solution to this pressing challenge. The innovative mobile SDK empowers apps to personalize content for each user, leveraging a combination of content metadata and on-device user data, including demographic and sensor data.
By training models on this rich trove of first-party data, Lerna AI enables apps to predict optimal content recommendations tailored to each user’s preferences and interests while preserving user privacy. In an era marked by heightened concern over data privacy and security, Lerna AI’s approach lets businesses meet their objectives without stepping outside ethical bounds. As Lerna AI’s CTO, Georgios Kellaris, highlights, “thanks to advances in privacy-preserving technologies like federated learning and differential privacy, training AI models on sensitive data is now possible, allowing us to learn from richer than before data, while protecting user privacy.”
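One of the techniques Kellaris mentions, differential privacy, adds calibrated noise to aggregate statistics so that no individual user’s record can be inferred from the output. A minimal sketch of a differentially private count (illustrative only, not Lerna AI’s implementation; the data is invented):

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0, seed=None):
    """Count matching records, then add Laplace noise with scale 1/epsilon
    (the sensitivity of a count query is 1)."""
    true_count = sum(1 for v in values if predicate(v))
    rng = random.Random(seed)
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    # Laplace sample via the inverse-CDF method.
    noise = -(1.0 / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 38, 61, 27]  # hypothetical on-device data
noisy = dp_count(ages, lambda a: a >= 30, epsilon=1.0, seed=0)
```

Smaller epsilon means more noise and stronger privacy; the true count here is 5, and the released value is close to, but deliberately not exactly, that.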
The model also performs well in practice: in tests, the AI delivered double the click-through rate, all without infringing on user data or rights. Lerna AI’s tailored approach also keeps users on the app longer, since personalized recommendations align with what mobile users actually want. According to a report by Segment, more than 56% of customers want personalized recommendations and user experiences, and 62% of business leaders cite improved customer experience as a benefit of personalization efforts.
Moving ahead, the convergence of AI and privacy will continue to shape the digital landscape, challenging conventional norms and necessitating adaptive regulatory frameworks. By fostering a culture of responsible AI development and promoting transparency, accountability, and user-centric design principles, we all can harness the transformative potential of AI while upholding the fundamental principles of privacy and individual rights.
0 notes
jcmarchi · 16 days
Text
Microsoft Aims to Protect Chatbots Against Users Who Trick Them - Technology Org
New Post has been published on https://thedigitalinsider.com/microsoft-aims-to-protect-chatbots-against-users-who-trick-them-technology-org/
Microsoft Aims to Protect Chatbots Against Users Who Trick Them - Technology Org
Microsoft Corporation is implementing measures to prevent users from manipulating artificial intelligence chatbots into performing unusual actions. The company, headquartered in Redmond, Washington, announced in a blog post on Thursday that new safety features are being integrated into Azure AI Studio, enabling developers to create customized AI assistants using their own datasets.
Chatbot robot. Image credit: James Royal-Lawson via Flickr, CC BY-SA 2.0
Among the tools being introduced are “prompt shields,” which aim to identify and block deliberate attempts—referred to as prompt injection attacks or jailbreaks—to induce unintended behavior in an AI model. Microsoft is also tackling “indirect prompt injections,” where hackers embed malicious instructions into the training data of a model, tricking it into executing unauthorized actions like stealing user data or seizing control of a system.
Sarah Bird, Microsoft’s chief product officer of responsible AI, described these attacks as “a unique challenge and threat.” She explained that the new defenses are designed to detect suspicious inputs in real-time and prevent their execution. Additionally, Microsoft is implementing a feature that notifies users when a model generates fictitious or erroneous responses.
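A prompt shield can be thought of as a classifier over incoming text. The deliberately naive sketch below uses a fixed pattern list, which is purely illustrative; Microsoft’s actual shields rely on trained models, not keyword rules:

```python
import re

# Hypothetical red-flag patterns; real detectors learn these from data.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |your |previous |prior )*instructions",
    r"developer mode",
    r"pretend (you are|to be)",
    r"reveal .{0,20}(system prompt|hidden instructions)",
]

def looks_like_injection(prompt: str) -> bool:
    """Flag prompts that resemble known jailbreak phrasing."""
    text = prompt.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)
```

Keyword filters are trivial to evade (paraphrase, encodings, indirect instructions hidden in retrieved documents), which is why production systems pair them with model-based detection and output-side checks.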
The company is committed to enhancing trust in its generative AI tools, which are increasingly utilized by both consumers and corporate clients. In February, Microsoft investigated incidents involving its Copilot chatbot, which generated responses ranging from unconventional to harmful. Following the investigation, Microsoft concluded that users had intentionally attempted to manipulate Copilot into generating these responses.
According to Bird, the frequency of such incidents is expected to rise as the usage of these tools expands and awareness of different manipulation techniques grows. Warning signs of such attacks may include repetitive questioning of a chatbot or prompts involving role-playing scenarios.
As OpenAI’s largest investor, Microsoft has made its partnership with the organization a cornerstone of its AI strategy. Bird emphasized the joint commitment of Microsoft and OpenAI to the safe deployment of AI and the integration of safeguards into the large language models underpinning generative AI.
However, she cautioned that solely relying on the model is insufficient, citing jailbreaks as an inherent vulnerability of the technology.
Written by Alius Noreika
0 notes
jcmarchi · 16 days
Text
Israel now uses mass facial recognition in the Gaza Strip - Technology Org
New Post has been published on https://thedigitalinsider.com/israel-now-uses-mass-facial-recognition-in-the-gaza-strip-technology-org/
Israel now uses mass facial recognition in the Gaza Strip - Technology Org
Israel has discreetly initiated a widespread facial recognition program in the Gaza Strip, compiling a database of Palestinians without their awareness or authorization.
A video surveillance camera – illustrative photo. Image credit: Pawel Czerwinski via Unsplash, free license
As revealed by The New York Times, this initiative, developed after the October 7th incidents, utilizes technology from Google Photos along with a specialized tool from the Tel Aviv-based firm Corsight to detect individuals associated with Hamas.
The facial recognition program was established jointly with Israel’s military campaign in Gaza. Following the October 7th events, operatives from Israel’s Unit 8200, the primary intelligence unit of the Israeli Defense Forces, identified potential targets by scrutinizing security camera footage and content uploaded to social media by opposition groups. Additionally, soldiers solicited information from Palestinian detainees to identify individuals affiliated with the adversary.
Corsight, renowned for its technology’s capability to accurately recognize individuals with less than half of their face visible, utilized these images to develop a facial recognition tool for Israeli personnel operating in Gaza.
In order to expand the database and pinpoint potential targets, the Israeli military installed facial recognition cameras at checkpoints along major routes used by Palestinians to travel southward.
Soldiers recounted to the Times that Corsight’s technology sometimes yielded inaccurate results, especially when dealing with low-quality footage or obscured facial features. In certain instances, Corsight’s tool erroneously identified individuals as being linked to adversary groups.
In October, several hospitals in Israel began employing Corsight’s technology for patient identification, as reported by Forbes. Since then, Corsight’s technology has demonstrated the capability to identify individuals “whose features had been impacted by physical trauma, and find a match amongst photos submitted by concerned family members.”
Corsight primarily targets governmental, law enforcement, and military applications. In 2020, the company, just one year old at the time, claimed its technology could identify faces even when masked. Two years later, Corsight purportedly embarked on developing a tool capable of constructing a person’s facial model based on their DNA. Last year, Corsight collaborated with the metropolitan police in Bogotá, Colombia, to locate suspects involved in murder and theft cases within the public transit system.
Written by Vytautas Valinskas
1 note · View note
jcmarchi · 17 days
Text
How Has AI Transformed the OCR Landscape?
New Post has been published on https://thedigitalinsider.com/how-has-ai-transformed-the-ocr-landscape/
How Has AI Transformed the OCR Landscape?
OCR technology has proved to be a transformative solution for document editing and management. Whether in business, finance, education, or another sector, OCR has made its mark by simplifying content modification. However, traditional OCR methods carry limitations that cast doubt on their capabilities: low-resolution images, complex fonts, and damaged text all degrade their performance, and these solutions often fail to decode handwritten notes, leading to poor recognition outcomes.
As a response, AI-powered OCR solutions have emerged, offering exceptional precision and speed. Given this background, this article will delve into how AI has augmented the OCR process. We will also present the best AI-driven OCR PDF to Word converter to help you redefine data digitization.
AI technology in OCR
AI technology has initiated a paradigm shift, promising to address and overcome the traditional constraints of OCR technology. As this emerging technology has made strides in other sectors, it has also impacted and improved the text recognition process. With the inception of artificial intelligence, OCR systems are not only recognizing text but have become more adept at grasping its nuances and context. Let’s explore how AI has influenced the OCR and charted the course for a smarter, more seamless future in data processing:
Improved Text Recognition Accuracy
AI-powered OCR systems use Deep Learning algorithms to analyze and spot text with remarkable precision. These algorithms are trained on vast datasets containing diverse text formats, which lets them recognize a wide array of fonts, styles, and even text superimposed on noisy backgrounds. This extensive drilling not only helps them identify characters and words but also understand the document context. This enhanced accuracy is crucial for various use cases, from automating data entry processes to making historical documents digitally accessible.
Handwriting Recognition
Handwriting recognition is one of the most significant advancements by AI in the OCR field. Traditional OCR systems often fail when faced with the handwritten text due to the wide variety in individual writing styles, strokes, and legibility. However, with the integration of AI and machine learning algorithms, OCR technology has significantly evolved.
AI-powered systems are trained on extensive datasets that include a diverse collection of handwriting samples. Through this excessive training, they come to identify patterns, nuances, and characteristics unique to each handwritten text.
Intelligent Character Prediction
Traditional OCR systems encountered difficulties when tasked with recognizing characters that were distorted, blurred, or poorly visible. However, AI-fuelled OCR technologies mark a significant leap in overcoming these challenges. These innovative systems contain advanced text correction algorithms leveraging deep learning and contextual analysis.
This approach enables them to make educated guesses about distorted, damaged, or partially obscured characters, intelligently predicting which character will be the most suitable in this context.
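At its very simplest, this kind of character prediction can be approximated by fuzzy-matching a damaged token against a vocabulary. A toy sketch using Python’s standard difflib (real OCR systems use learned language models rather than plain edit similarity; the vocabulary here is invented):

```python
import difflib

VOCABULARY = ["recognition", "character", "document", "conversion", "language"]

def predict_word(damaged: str, vocabulary=VOCABULARY):
    """Return the most likely intended word for a distorted OCR token,
    or the token unchanged if nothing in the vocabulary is close enough."""
    matches = difflib.get_close_matches(damaged, vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else damaged

predict_word("rec0gniti0n")  # zeros misread for the letter 'o' -> "recognition"
```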
Expansive Language Support
Outdated OCR approaches could analyze only English text. This posed significant trouble with other languages, restricting the exploration of foreign content and datasets.
However, advanced OCR systems have developed a nuanced understanding of linguistic patterns, character shapes, and the specific features of various writing systems. Whether you are dealing with Chinese, Arabic, French, or any other language, these AI tools will enable quick text identification with minimal errors.
One-Click PDF Conversion
Modern OCR solutions have revolutionized document management by introducing the one-click PDF conversion feature. This feature allows you to transform PDF documents into editable Word formats swiftly, eliminating the need for complex conversion processes.
With just a single prompt, you can convert intricate reports, vital documents, and extensive research papers into Word documents. Modern OCR technology also ensures that the layout, formatting, and even the images of your document will be preserved.
Choosing an AI-powered OCR PDF to Word converter brings a suite of benefits. Let’s explore the key advantages of adopting an AI OCR tool:
Enhanced Accuracy
As discussed above, AI-powered OCR systems leverage advanced algorithms to improve the accuracy of text recognition. By accurately interpreting a wide range of fonts and handwriting styles, they reduce the chances of error and inefficiency.
Minimized Manual Data Handling
AI-driven document processing minimizes manual data handling. It cuts the costs otherwise spent on human labor and also speeds up workflows. Further, the AI-powered OCR tool allows for quicker decision-making and frees up staff for strategic work, boosting operational efficiency and data accuracy.
Multimodal OCR
An AI-driven OCR software can recognize and process text from a variety of sources, not just scanned documents. Whether it’s images, PDFs, audio, or even real-time video feeds, AI-enhanced OCR can recognize text, expanding the scope of document digitization.
Batch Processing
Most OCR tools, like Wondershare PDFelement, are capable of processing multiple documents in one go. Batch OCR is particularly useful for businesses dealing with large volumes of data. Such advanced functionalities optimize efficiency, allowing for the quick conversion of entire libraries of documents into editable formats.
If you are looking for a great OCR PDF tool, Wondershare PDFelement is a top choice. The platform stands as an all-inclusive PDF editor that houses powerful functionalities. At the heart of its impressive features is its OCR capability, transforming the text recognition experience.
Apart from this, PDFelement features an AI assistant, Lumi, that interacts with your files. Its AI toolkit also includes Rewrite, Proofread, AI-Detect, Summarize, and Translate features.
How to Convert OCR PDF to Word Using PDFelement?
With a single prompt, you can activate the Convert feature, streamlining the entire OCR PDF to Word conversion. Here is how to make it possible:
Step 1: Open the PDF file and navigate to the “AI Sidebar”. Choose “Chat with AI.”
Access chat with AI PDFelement
Step 2: Enter “How to convert PDF to Word?” in the chatbox.
Enter conversion prompt PDFelement
Step 3: PDFelement AI will provide the steps for converting PDF to Word in the software, along with the features you may need. Click the “Convert PDF to Word” feature.
Access convert tab PDFelement
Step 4: In the popup Convert window, click the “Settings” button to set the OCR options.
Convert PDF to word settings
Step 5: After setting the OCR options, click “OK” to start the process.
Ocr settings PDFelement
Traditional OCR approaches are effective for text recognition, but they are not without flaws. They often fail to capture text from low-quality images or decipher complex languages and handwriting. To fill such gaps, AI-powered OCR solutions offer a ground-breaking alternative.
In this article, we analyzed how modern OCR tools like PDFelement provide enhanced accuracy, security, and efficiency in document conversion. Embrace the future of document digitization with PDFelement’s AI-powered OCR technology.
0 notes
jcmarchi · 17 days
Text
Generative AI develops potential new drugs for antibiotic-resistant bacteria - Technology Org
New Post has been published on https://thedigitalinsider.com/generative-ai-develops-potential-new-drugs-for-antibiotic-resistant-bacteria-technology-org/
Generative AI develops potential new drugs for antibiotic-resistant bacteria - Technology Org
With nearly 5 million deaths linked to antibiotic resistance globally every year, new ways to combat resistant bacterial strains are urgently needed.
Stanford Medicine and McMaster University researchers are tackling this problem with generative artificial intelligence. A new model, SyntheMol (for synthesizing molecules), created structures and chemical recipes for six novel drugs to kill resistant strains of Acinetobacter baumannii, one of the leading pathogens responsible for antibacterial resistance-related deaths.
The researchers described their model and experimental validation of these new compounds in a study published in Nature Machine Intelligence.
“There’s a huge public health need to develop new antibiotics quickly,” said James Zou, PhD, an associate professor of biomedical data science and co-senior author on the study. “We hypothesised that there are a lot of potential molecules out there that could be effective drugs, but we haven’t made or tested them yet. That’s why we wanted to use AI to design entirely new molecules that have never been seen in nature.”
Researchers had taken different computational approaches to antibiotic development before the advent of generative AI, the same type of artificial intelligence technology that underlies large language models like ChatGPT. They used algorithms to scroll through existing drug libraries, identifying those compounds most likely to act against a given pathogen. This technique, which sifted through 100 million known compounds, yielded results but just scratched the surface in finding all the chemical compounds that could have antibacterial properties.
“Chemical space is gigantic,” said Kyle Swanson, a Stanford computational science doctoral student and co-lead author on the study. “People have estimated that there are close to 10^60 possible drug-like molecules. So, 100 million is nowhere close to covering that entire space.”
Hallucinating for drug development
Generative AI’s tendency to “hallucinate,” or make up responses out of whole cloth, could be a boon when it comes to drug discovery, but previous attempts to generate new drugs with this kind of AI resulted in compounds that would be impossible to make in the real world, Swanson said. The researchers needed to put guardrails around SyntheMol’s activity — namely, to ensure that any molecules the model dreamed up could be synthesized in a lab.
“We’ve approached this problem by trying to bridge that gap between computational work and wet lab validation,” Swanson said.
The model was trained to construct potential drugs using a library of more than 130,000 molecular building blocks and a set of validated chemical reactions. It generated the final compound and the steps it took with those building blocks, giving the researchers a set of recipes to produce the drugs.
The researchers also trained the model on existing data of different chemicals’ antibacterial activity against A. baumannii. With these guidelines and its building block starting set, SyntheMol generated around 25,000 possible antibiotics and the recipes to make them in less than nine hours. To prevent the bacteria from quickly developing resistance to the new compounds, researchers then filtered the generated compounds to only those that were dissimilar from existing compounds.
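The dissimilarity filter in that last step is conventionally computed as Tanimoto similarity over molecular fingerprints; on binary fingerprints this reduces to the Jaccard index. A schematic sketch over plain fragment sets (the compounds and fragments are invented placeholders, not the study’s data):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard index; equals Tanimoto similarity on binary fingerprints."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def keep_dissimilar(candidates, known, max_similarity=0.5):
    """Keep candidates sufficiently different from every known compound."""
    return [
        name
        for name, frags in candidates
        if all(jaccard(frags, k) <= max_similarity for k in known)
    ]

known_drugs = [{"A", "B", "C"}, {"C", "D", "E"}]
candidates = [
    ("novel-1", {"X", "Y", "Z"}),         # shares nothing with known drugs
    ("lookalike", {"A", "B", "C", "D"}),  # too close to the first known drug
]
keep_dissimilar(candidates, known_drugs)  # -> ["novel-1"]
```

Filtering for novelty this way both avoids rediscovering existing antibiotics and, as the researchers note, reduces the chance that bacteria already resistant to known compounds will resist the new ones.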
“Now we have not just entirely new molecules but also explicit instructions for how to make those molecules,” Zou said.
A new chemical space
The researchers chose the 70 compounds with the highest potential to kill the bacterium and worked with the Ukrainian chemical company Enamine to synthesize them. The company was able to efficiently generate 58 of these compounds, six of which killed a resistant strain of A. baumannii when researchers tested them in the lab. These new compounds also showed antibacterial activity against other kinds of infectious bacteria prone to antibiotic resistance, including E. coli, Klebsiella pneumoniae and MRSA.
The scientists were able to further test two of the six compounds for toxicity in mice, as the other four didn’t dissolve in water. The two they tested seemed safe; the next step is to test the drugs in mice infected with A. baumannii to see if they work in a living body, Zou said.
The six compounds are vastly different from each other and from existing antibiotics. The researchers don’t know how their antibacterial properties work at the molecular level, but exploring those details could yield general principles relevant to other antibiotic development.
“This AI is really designing and teaching us about this entirely new part of the chemical space that humans just haven’t explored before,” Zou said.
Zou and Swanson are also refining SyntheMol and broadening its reach. They’re collaborating with other research groups to use the model for drug discovery for heart disease and to create new fluorescent molecules for laboratory research.
Source: Stanford University
0 notes
jcmarchi · 17 days
Text
Scientists Create Novel Technique to Form Human Artificial Chromosomes - Technology Org
New Post has been published on https://thedigitalinsider.com/scientists-create-novel-technique-to-form-human-artificial-chromosomes-technology-org/
Scientists Create Novel Technique to Form Human Artificial Chromosomes - Technology Org
Human artificial chromosomes (HACs) capable of working within human cells could power advanced gene therapies, including those addressing some cancers, along with many laboratory applications, though serious technical obstacles have hindered their development. Now a team led by researchers at the Perelman School of Medicine at the University of Pennsylvania has made a significant breakthrough in this field that effectively bypasses a common stumbling block.
In a study published in Science, the researchers explained how they devised an efficient technique for making HACs from single, long constructs of designer DNA. Prior methods for making HACs have been limited by the fact that the DNA constructs used to make them tend to join together—“multimerize”—in unpredictably long series and with unpredictable rearrangements. The new method allows HACs to be crafted more quickly and precisely, which, in turn, will directly speed up the rate at which DNA research can be done. In time, with an effective delivery system, this technique could lead to better engineered cell therapies for diseases like cancer.
“Essentially, we did a complete overhaul of the old approach to HAC design and delivery,” said Ben Black, PhD, the Eldridge Reeves Johnson Foundation Professor of Biochemistry and Biophysics at Penn. “The HAC we built is very attractive for eventual deployment in biotechnology applications, for instance, where large scale genetic engineering of cells is desired. A bonus is that they exist alongside natural chromosomes without having to alter the natural chromosomes in the cell.”
The first HACs were developed 25 years ago, and artificial chromosome technology is already well advanced for the smaller, simpler chromosomes of lower organisms such as bacteria and yeast. Human chromosomes are another matter, due largely to their greater sizes and more complex centromeres, the central region where X-shaped chromosomes’ arms are joined. Researchers have been able to get small artificial human chromosomes to form from self-linking lengths of DNA added to cells, but these lengths of DNA multimerize with unpredictable organizations and copy numbers—complicating their therapeutic or scientific use—and the resulting HACs sometimes even end up incorporating bits of natural chromosomes from their host cells, making edits to them unreliable.
In their study, the Penn Medicine researchers devised improved HACs with multiple innovations: These included larger initial DNA constructs containing larger and more complex centromeres, which allow HACs to form from single copies of these constructs. For delivery to cells, they used a yeast-cell-based delivery system capable of carrying larger cargoes.
“Instead of trying to inhibit multimerization, for example, we just bypassed the problem by increasing the size of the input DNA construct so that it naturally tended to remain in predictable single-copy form,” said Black.
The researchers showed that their method was much more efficient at forming viable HACs compared to standard methods, and yielded HACs that could reproduce themselves during cell division.
The potential advantages of artificial chromosomes—assuming they can be delivered easily to cells and operate like natural chromosomes—are many. They would offer safer, more productive, and more durable platforms for expressing therapeutic genes, in contrast to virus-based gene-delivery systems, which can trigger immune reactions and involve harmful viral insertion into natural chromosomes. Normal gene expression in cells also requires many local and distant regulatory factors, which are virtually impossible to reproduce artificially outside of a chromosome-like context. Moreover, artificial chromosomes, unlike relatively cramped viral vectors, would permit the expression of large, cooperative ensembles of genes, for example to construct complex protein machines.
Black expects that the same broad approach his group took in this study will be useful in making artificial chromosomes for other higher organisms, including plants for agricultural applications such as pest-resistant, high-yield crops.
Source: University of Pennsylvania
jcmarchi · 18 days
Text
Stop out-of-control AI and focus on people - Technology Org
New Post has been published on https://thedigitalinsider.com/stop-out-of-control-ai-and-focus-on-people-technology-org/
Stop out-of-control AI and focus on people - Technology Org
Companies need to stop designing new artificial intelligence technology just because they can, and people need to stop adapting their practices, habits and laws to fit the new technology. Instead, AI should be designed to fit exactly what people actually need.
That’s the view of 50 global experts who’ve contributed research papers to Human-Centred AI, a new book co-edited by two Université de Montréal experts that explores the risks — and missed opportunities — of the status quo and how it can be made better.
One important way would be through legal mechanisms, now woefully inadequate to the task, said contributor Pierre Larouche, an UdeM law professor and faculty vice-dean who specializes in competition law.
Treating AI as “a standalone object of law and regulation” and assuming that there is “no law currently applicable to AI” has left some policymakers feeling they face an insurmountable task, said Larouche.
“Despite the scarcity – if not outright absence – of specific rules concerning AI as such, there is no shortage of laws that can be applied to AI, because of its embeddedness in social and economic relationships,” he said.
The challenge is not to create new legislation but to extend and apply existing laws to AI, he argued. That way, policymakers won’t fall into the trap of “delaying tactics designed to extend discussion indefinitely, while the technology continues to progress at a fast pace.”
Montreal lawyer Benjamin Prud’homme, vice-president of policy, society and global affairs at the UdeM-affiliated Mila (Quebec Artificial Intelligence Institute), one of the largest academic communities dedicated to AI, agrees.
He urges policymakers to “start moving away from the dichotomy between innovation and regulation (and) that we acknowledge it might be okay to stifle innovation if that innovation is irresponsible.”
Prud’homme cited the European Union as an example of being proactive in this regard via its “very ambitious AI Act, the first systemic law on AI, (which) should be definitively approved in the next few months.”
Co-edited by UdeM professor and health law expert Catherine Régis and UdeM public-health expert Jean-Louis Denis, along with the University of Cambridge’s Maria Luciana Axente and Osaka University’s Atsuo Kishimoto, Human-Centred AI brings together specialists in disciplines ranging from education to management to political science.
The book examines AI technologies in a number of contexts – including agriculture, workplace environments, healthcare, criminal justice and higher education – and offers people-focused approaches to regulation and interdisciplinary ways of working together to make AI more responsive to human needs.
University of Edinburgh philosophy professor Shannon Vallor points to increasingly popular generative AI as an example of technology which is not human-centred. She argues the technology was created by organizations simply wanting to see how powerful they can make a system, rather than making “something designed by us, for us, and to benefit us.”
Other contributors to the new book look at how AI is shaping human behaviour (via Google, Facebook and other platforms), how AI lacks data on minorities and thereby helps marginalize them, and how AI undermines privacy when people are unaware of how their information is collected and stored.
Source: University of Montreal