#ai & big data expo
jcmarchi · 2 months
UK and France to collaborate on AI following Horizon membership
New Post has been published on https://thedigitalinsider.com/uk-and-france-to-collaborate-on-ai-following-horizon-membership/
The UK and France have announced new funding initiatives and partnerships aimed at advancing global AI safety. The developments come in the wake of the UK’s association with Horizon Europe, a move broadly seen as putting the divisions of Brexit in the past and repairing relations for the good of the continent.
French Minister for Higher Education and Research, Sylvie Retailleau, is scheduled to meet with UK Secretary of State Michelle Donelan in London today for discussions marking a pivotal moment in bilateral scientific cooperation.
Building upon a rich history of collaboration that has yielded groundbreaking innovations such as the Concorde and the Channel Tunnel, the ministers will endorse a joint declaration aimed at deepening research ties between the two nations. This includes a commitment of £800,000 in new funding towards joint research efforts, particularly within the framework of Horizon Europe.
A landmark partnership between the UK’s AI Safety Institute and France’s Inria will also be unveiled, signifying a shared commitment to the responsible development of AI technology. This collaboration is timely, given France’s upcoming hosting of the AI Safety Summit later this year—which aims to build upon previous agreements and discussions on frontier AI testing achieved during the UK edition last year.
Furthermore, the establishment of the French-British joint committee on Science, Technology, and Innovation represents an opportunity to foster cooperation across a range of fields, including low-carbon hydrogen, space observation, AI, and research security.
UK Secretary of State Michelle Donelan said:
“The links between the UK and France’s brightest minds are deep and longstanding, from breakthroughs in aerospace to tackling climate change. It is only right that we support our innovators, to unleash the power of their ideas to create jobs and grow businesses in concert with our closest neighbour on the continent.
Research is fundamentally collaborative, and alongside our bespoke deal on Horizon Europe, this deepening partnership with France – along with our joint work on AI safety – is another key step in realising the UK’s science superpower ambitions.”
The collaboration between the UK and France underscores their shared commitment to advancing scientific research and innovation, with a focus on emerging technologies such as AI and quantum.
Sylvie Retailleau, French Minister of Higher Education and Research, commented:
“This joint committee is a perfect illustration of the international component of research – from identifying key priorities such as hydrogen, AI, space and research security – to enabling collaborative work and exchange of ideas and good practices through funding.
Doing so with a trusted partner as the UK – who just associated to Horizon Europe – is a great opportunity to strengthen France’s science capabilities abroad, and participate in Europe’s strategic autonomy openness.”
As the UK continues to deepen its engagement with global partners in the field of science and technology, these bilateral agreements serve as a testament to its ambition to lead the way in scientific discovery and innovation on the world stage.
(Photo by Aleks Marinkovic on Unsplash)
See also: UK Home Secretary sounds alarm over deepfakes ahead of elections
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai safety summit, artificial intelligence, europe, france, government, horizon europe, research, safety, uk
syndicode · 5 years
4 Reasons to Attend AI Conference Kyiv
Real case studies and productive networking are two of the four reasons to attend AI Conference Kyiv next week! Earlier, we announced this long-awaited event. Next week, on June 4, Smile-Expo will host the large-scale AI Conference Kyiv. The event will focus on the use of ...
hodldrgn-blog · 6 years
New Post has been published on https://cryptomoonity.com/interview-with-travelbybit-ceo-on-binance-tie-up/
Interview with TravelbyBit CEO on Binance Tie-up
TravelbyBit recently piloted the use of Binance Coin (BNB) on one of its merchants. We sat down with TravelbyBit CEO Caleb Yeoh to dive deeper into our partnership.
Just weeks after Binance announced an investment in tourism-focused crypto payments company TravelbyBit, the latter introduced support for Binance Coin (BNB), starting with a bar in Australia. Right now, TravelbyBit is integrating BNB across all its merchants, with completion soon to be announced. This is a major development for BNB’s utility, delivered by the company behind Brisbane Airport’s transformation into the world’s first cryptocurrency-ready airport.
[Embedded YouTube video]
At the forefront of making cryptocurrencies usable for worldwide travel is Caleb Yeoh, CEO of TravelbyBit. An avid kitesurfer who sees blockchain and cryptocurrency as the antidotes to financial hassles related to traveling, Caleb has already driven greater crypto adoption, with hundreds of merchants in Australia now accepting payments in cryptocurrency, thanks to TravelbyBit.
Here are some highlights from our interview with Caleb.
How was your trip to Malta earlier this month? What sort of conversation did you have with CZ and the rest of the team?
Malta is truly the blockchain island. The energy and excitement there was amazing. The reception CZ and Binance had from the regulators and government folk was what blew me away. The country is truly grateful for the help Binance provided in shaping the blockchain island initiative.
The one thing that really struck me was CZ’s message: “In an industry where some have called the Wild Wild West, in the absence of rules, you can still act ethically.” Binance has always focused on doing the right thing by its users and the ecosystem.
Please walk us through the origins of TravelbyBit. How did it start? I heard that some kitesurfing is involved.
As an avid kitesurfer, I’ve traveled to destinations all around the globe, hunting for the best wind and kiting conditions. I spent a lot of transit time at airports and the exchange rate robbery that happens in these locations is shocking. The fees and inconvenience in dealing with multiple currencies when in transit is a nightmare. Imagine if you can travel all around the world with one truly global currency with no fees. That’s what TravelbyBit and Binance are working towards — digital currency for global payments.
TravelbyBit CEO Caleb Yeoh during one of his kitesurfing trips (left) and shaking hands with Binance CEO Changpeng Zhao (right)
We admire your significant role in transforming Brisbane Airport into a crypto-friendly airport. What went into ushering in that change?
Working with mainstream businesses is extremely hard. The deal with the airport took over six months of discussions and planning. The truth is we are only at the beginning of this movement and many large organizations are very risk-averse and prefer to take a wait-and-see approach. Only the most innovative ones take a bold step forward and lead the way. I must say Brisbane Airport Corporation and the Cater Care group at the airport are truly innovators in this space. There are folks in those organizations who are brave enough to try something new.
What factors convince merchants to accept crypto payments?
To be frank, we had a compelling offer: Minimal setup fees, no ongoing merchant fees to the user or to the merchant. We don’t intend to charge any fees to the merchant or to the consumer. Also, you will be surprised how many people genuinely support us because of the philosophy behind crypto, which is centered around freedom and liberty.
What are your expectations from the new partnership with Binance?
I think TravelbyBit and Binance are a good fit. Our companies want to see blockchain and digital currency adoption grow around the globe. Binance funding has enabled us to focus on building cool stuff for the community, without going to traditional VCs which would otherwise have set us on a different path. Crypto is a social movement and Binance has allowed us to keep TravelbyBit focused on the community.
I believe that protecting privacy is the key to a free and democratic society, and CZ believes increasing freedom makes the world a better place. This brings our organizations into close alignment.
In the next 12 months, what’s your projection of TravelbyBit’s reach and operations?
We are in talks with the most innovative retailers, airline lounges and airports around the world to roll out our blockchain payment system. Binance has over 10 million users now. We can channel those users to the retailers we work with at the airports.
Where can we see the TravelbyBit robot next?
We are working on a really cool project which I can share later down the track. All I’ll say is you will see more of the robot soon.
Thanks @TravelbyBit for increasing #BNB adoption. @binance https://t.co/KfGWk2cDEO
 — @cz_binance
Interview with TravelbyBit CEO on Binance Tie-up was originally published in Binance Exchange on Medium, where people are continuing the conversation by highlighting and responding to this story.
jcmarchi · 2 months
UK Home Secretary sounds alarm over deepfakes ahead of elections
New Post has been published on https://thedigitalinsider.com/uk-home-secretary-sounds-alarm-over-deepfakes-ahead-of-elections/
Criminals and hostile state actors could hijack Britain’s democratic process by deploying AI-generated “deepfakes” to mislead voters, UK Home Secretary James Cleverly cautioned in remarks ahead of meetings with major tech companies. 
Speaking to The Times, Cleverly emphasised the rapid advancement of AI technology and its potential to undermine elections not just in the UK but globally. He warned that malign actors working on behalf of nations like Russia and Iran could generate thousands of highly realistic deepfake images and videos to disrupt the democratic process.
“Increasingly today the battle of ideas and policies takes place in the ever-changing and expanding digital sphere,” Cleverly told the newspaper. “The era of deepfake and AI-generated content to mislead and disrupt is already in play.”
The Home Secretary plans to urge collective action from Silicon Valley giants like Google, Meta, Apple, and YouTube when he meets with them this week. His aim is to implement “rules, transparency, and safeguards” to protect democracy from deepfake disinformation.
Cleverly’s warnings come after a series of deepfake audios imitating Labour leader Keir Starmer and London Mayor Sadiq Khan circulated online last year. Fake BBC News videos purporting to examine PM Rishi Sunak’s finances have also surfaced.
The tech meetings follow a recent pact signed by major AI companies like Adobe, Amazon, Google, and Microsoft during the Munich Security Conference to take “reasonable precautions” against disruptions caused by deepfake content during elections worldwide.
As concerns over the proliferation of deepfakes continue to grow, the world must confront the challenges they pose in shaping public discourse and potentially influencing electoral outcomes.
(Image Credit: Lauren Hurley / No 10 Downing Street under OGL 3 license)
See also: Stability AI previews Stable Diffusion 3 text-to-image model
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, artificial intelligence, deepfakes, democracy, disinformation, elections, ethics, government, home secretary, james cleverly, misinformation, Society, uk, uk election, usa election, vote, voting
jcmarchi · 2 days
Igor Jablokov, Pryon: Building a responsible AI future
New Post has been published on https://thedigitalinsider.com/igor-jablokov-pryon-building-a-responsible-ai-future/
As artificial intelligence continues to rapidly advance, ethical concerns around the development and deployment of these world-changing innovations are coming into sharper focus.
In an interview ahead of the AI & Big Data Expo North America, Igor Jablokov, CEO and founder of AI company Pryon, addressed these pressing issues head-on.
Critical ethical challenges in AI
“There’s not one, maybe there’s almost 20 plus of them,” Jablokov stated when asked about the most critical ethical challenges. He outlined a litany of potential pitfalls that must be carefully navigated—from AI hallucinations and fabricated outputs, to data privacy violations and intellectual property leaks from training on proprietary information.
Bias and adversarial content seeping into training data is another major worry, according to Jablokov. Security vulnerabilities like embedded agents and prompt injection attacks also rank highly on his list of concerns, as well as the extreme energy consumption and climate impact of large language models.
Pryon’s origins can be traced back to the earliest stirrings of modern AI over two decades ago. Jablokov previously led an advanced AI team at IBM where they designed a primitive version of what would later become Watson. “They didn’t greenlight it. And so, in my frustration, I departed, stood up our last company,” he recounted. That company, also called Pryon at the time, went on to become Amazon’s first AI-related acquisition, birthing what’s now Alexa.
The current incarnation of Pryon has aimed to confront AI’s ethical quandaries through responsible design focused on critical infrastructure and high-stakes use cases. “[We wanted to] create something purposely hardened for more critical infrastructure, essential workers, and more serious pursuits,” Jablokov explained.
A key element is offering enterprises flexibility and control over their data environments. “We give them choices in terms of how they’re consuming their platforms…from multi-tenant public cloud, to private cloud, to on-premises,” Jablokov said. This allows organisations to ring-fence highly sensitive data behind their own firewalls when needed.
Pryon also emphasises explainable AI and verifiable attribution of knowledge sources. “When our platform reveals an answer, you can tap it, and it always goes to the underlying page and highlights exactly where it learned a piece of information from,” Jablokov described. This allows human validation of the knowledge provenance.
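The idea of verifiable attribution can be sketched as a data structure in which every answer carries a pointer back to the exact source passage it was derived from. The structure below is purely illustrative, not Pryon’s actual API:

```python
# Illustrative sketch of answer-level attribution: every answer carries
# a pointer back to the exact source passage it was derived from.
# This is a hypothetical structure, not Pryon's actual API.
from dataclasses import dataclass

@dataclass
class AttributedAnswer:
    text: str          # the answer shown to the user
    source_doc: str    # identifier of the underlying document
    page: int          # page the answer was drawn from
    start: int         # character offset of the supporting span
    end: int

    def supporting_span(self, documents: dict) -> str:
        """Return the exact passage the answer is grounded in."""
        return documents[self.source_doc][self.start:self.end]

documents = {"manual.pdf": "Shut off valve A before servicing pump B."}
answer = AttributedAnswer(
    text="Valve A must be shut off first.",
    source_doc="manual.pdf", page=1, start=0, end=41,
)
print(answer.supporting_span(documents))
```

A human reviewer can then compare the answer against the highlighted span before acting on it, which is the validation step Jablokov describes.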
In some realms like energy, manufacturing, and healthcare, Pryon has implemented human-in-the-loop oversight before AI-generated guidance goes to frontline workers. Jablokov pointed to one example where “supervisors can double-check the outcomes and essentially give it a badge of approval” before information reaches technicians.
Ensuring responsible AI development
Jablokov strongly advocates for new regulatory frameworks to ensure responsible AI development and deployment. While welcoming the White House’s recent executive order as a start, he expressed concerns about risks around generative AI like hallucinations, static training data, data leakage vulnerabilities, lack of access controls, copyright issues, and more.  
Pryon has been actively involved in these regulatory discussions. “We’re back-channelling to a mess of government agencies,” Jablokov said. “We’re taking an active hand in terms of contributing our perspectives on the regulatory environment as it rolls out…We’re showing up by expressing some of the risks associated with generative AI usage.”
On the potential for an uncontrolled, existential “AI risk” – as has been warned about by some AI leaders – Jablokov struck a relatively sanguine tone about Pryon’s governed approach: “We’ve always worked towards verifiable attribution…extracting out of enterprises’ own content so that they understand where the solutions are coming from, and then they decide whether they make a decision with it or not.”
The CEO firmly distanced Pryon’s mission from the emerging crop of open-ended conversational AI assistants, some of which have raised controversy around hallucinations and lacking ethical constraints.
“We’re not a clown college. Our stuff is designed to go into some of the more serious environments on planet Earth,” Jablokov stated bluntly. “I think none of you would feel comfortable ending up in an emergency room and having the medical practitioners there typing in queries into a ChatGPT, a Bing, a Bard…”
He emphasised the importance of subject matter expertise and emotional intelligence when it comes to high-stakes, real-world decision-making. “You want somebody that has hopefully many years of experience treating things similar to the ailment that you’re currently undergoing. And guess what? You like the fact that there is an emotional quality that they care about getting you better as well.”
At the upcoming AI & Big Data Expo, Pryon will unveil new enterprise use cases showcasing its platform across industries like energy, semiconductors, pharmaceuticals, and government. Jablokov teased that they will also reveal “different ways to consume the Pryon platform” beyond the end-to-end enterprise offering, including potentially lower-level access for developers.
As AI’s domain rapidly expands from narrow applications to more general capabilities, addressing the ethical risks will become only more critical. Pryon’s sustained focus on governance, verifiable knowledge sources, human oversight, and collaboration with regulators could offer a template for more responsible AI development across industries.
You can watch our full interview with Igor Jablokov below:
[embedded content]
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, ai & big data expo, ai and big data expo, artificial intelligence, ethics, hallucinations, igor jablokov, regulation, responsible ai, security, TechEx
jcmarchi · 3 days
Microsoft unveils Phi-3 family of compact language models
New Post has been published on https://thedigitalinsider.com/microsoft-unveils-phi-3-family-of-compact-language-models/
Microsoft has announced the Phi-3 family of open small language models (SLMs), touting them as the most capable and cost-effective of their size available. The innovative training approach developed by Microsoft researchers has allowed the Phi-3 models to outperform larger models on language, coding, and math benchmarks.
“What we’re going to start to see is not a shift from large to small, but a shift from a singular category of models to a portfolio of models where customers get the ability to make a decision on what is the best model for their scenario,” said Sonali Yadav, Principal Product Manager for Generative AI at Microsoft.
The first Phi-3 model, Phi-3-mini at 3.8 billion parameters, is now publicly available in Azure AI Model Catalog, Hugging Face, Ollama, and as an NVIDIA NIM microservice. Despite its compact size, Phi-3-mini outperforms models twice its size. Additional Phi-3 models like Phi-3-small (7B parameters) and Phi-3-medium (14B parameters) will follow soon.
“Some customers may only need small models, some will need big models and many are going to want to combine both in a variety of ways,” said Luis Vargas, Microsoft VP of AI.
The key advantage of SLMs is their smaller size enabling on-device deployment for low-latency AI experiences without network connectivity. Potential use cases include smart sensors, cameras, farming equipment, and more. Privacy is another benefit by keeping data on the device.
(Credit: Microsoft)
Large language models (LLMs) excel at complex reasoning over vast datasets—strengths suited to applications like drug discovery by understanding interactions across scientific literature. However, SLMs offer a compelling alternative for simpler query answering, summarisation, content generation, and the like.
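The “portfolio of models” approach Yadav describes can be sketched as a simple router that sends easy queries to a compact SLM and complex ones to a larger LLM. The heuristic and model names below are illustrative assumptions, not Microsoft’s routing logic:

```python
# Toy sketch of routing across a model portfolio: a cheap on-device SLM
# for simple requests, a larger hosted LLM for complex reasoning.
# The model names and the complexity heuristic are assumptions.
REASONING_HINTS = ("why", "prove", "compare", "analyze", "derive")

def pick_model(query: str) -> str:
    """Route a query to the smallest model likely to handle it."""
    long_query = len(query.split()) > 50
    needs_reasoning = any(h in query.lower() for h in REASONING_HINTS)
    if long_query or needs_reasoning:
        return "large-llm"     # e.g. a hosted frontier model
    return "phi-3-mini"        # compact SLM, can run on-device

print(pick_model("Summarise this paragraph"))        # phi-3-mini
print(pick_model("Compare SLM and LLM trade-offs"))  # large-llm
```

A production system would of course use a learned classifier or cost model rather than keyword matching, but the shape of the decision is the same.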
“Rather than chasing ever-larger models, Microsoft is developing tools with more carefully curated data and specialised training,” commented Victor Botev, CTO and Co-Founder of Iris.ai.
“This allows for improved performance and reasoning abilities without the massive computational costs of models with trillions of parameters. Fulfilling this promise would mean tearing down a huge adoption barrier for businesses looking for AI solutions.”
Breakthrough training technique
What enabled Microsoft’s SLM quality leap was an innovative data filtering and generation approach inspired by bedtime story books.
“Instead of training on just raw web data, why don’t you look for data which is of extremely high quality?” asked Sebastien Bubeck, Microsoft VP leading SLM research.  
Ronen Eldan’s nightly reading routine with his daughter sparked the idea to generate a ‘TinyStories’ dataset of millions of simple narratives created by prompting a large model with combinations of words a 4-year-old would know. Remarkably, a 10M parameter model trained on TinyStories could generate fluent stories with perfect grammar.
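The TinyStories generation recipe described above—prompting a large model with a handful of simple words to weave into a story—can be sketched as follows. The vocabulary and prompt template here are illustrative, not the paper’s exact ones:

```python
import random

# Sketch of TinyStories-style prompt construction: sample a few words a
# 4-year-old would know and ask a large model to build a story around them.
# The vocabulary and template are illustrative assumptions.
SIMPLE_WORDS = ["dog", "ball", "sun", "tree", "jump", "happy", "red", "cake"]

def tinystories_prompt(rng: random.Random, n_words: int = 3) -> str:
    words = rng.sample(SIMPLE_WORDS, n_words)
    return (
        "Write a short story using only words a 4-year-old would "
        f"understand. It must include the words: {', '.join(words)}."
    )

prompt = tinystories_prompt(random.Random(0))
print(prompt)
```

Each generated prompt is then sent to a large model, and the resulting stories form the synthetic training corpus for the small model.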
Building on that early success, the team procured high-quality web data vetted for educational value to create the ‘CodeTextbook’ dataset. This was synthesised through rounds of prompting, generation, and filtering by both humans and large AI models.
“A lot of care goes into producing these synthetic data,” Bubeck said. “We don’t take everything that we produce.”
The high-quality training data proved transformative. “Because it’s reading from textbook-like material…you make the task of the language model to read and understand this material much easier,” Bubeck explained.
Mitigating AI safety risks
Despite the thoughtful data curation, Microsoft emphasises applying additional safety practices to the Phi-3 release mirroring its standard processes for all generative AI models.
“As with all generative AI model releases, Microsoft’s product and responsible AI teams used a multi-layered approach to manage and mitigate risks in developing Phi-3 models,” a blog post stated.  
This included further training examples to reinforce expected behaviours, assessments to identify vulnerabilities through red-teaming, and offering Azure AI tools for customers to build trustworthy applications atop Phi-3.
(Photo by Tadas Sar)
See also: Microsoft to forge AI partnerships with South Korean tech leaders
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, artificial intelligence, language models, microsoft, open source, phi-3, small language models
jcmarchi · 5 days
Microsoft to forge AI partnerships with South Korean tech leaders
New Post has been published on https://thedigitalinsider.com/microsoft-to-forge-ai-partnerships-with-south-korean-tech-leaders/
Microsoft is set to host top executives from South Korea’s leading technology firms next month to strengthen its AI partnerships.
The high-level meeting, dubbed the MS CEO Summit 2024, will be held on 14 May 2024 and feature Microsoft’s founder Bill Gates and Chairman and CEO Satya Nadella. They will engage in closed-door discussions with Kyung Kye-hyun of Samsung, Kwak Noh-jung of SK Hynix, Cho Joo-wan of LG Electronics, and Ryu Young-sang of SK Telecom.
Sources for The Korea Economic Daily suggest that Microsoft plans to explore joint ventures in AI technology across various sectors. Discussions with Samsung and SK Hynix will likely centre on the joint development and supply of AI chips.
Samsung and SK Hynix are recognised as being among the world’s leading memory chipmakers and can enhance Microsoft’s server capabilities with next-generation technologies such as High-Bandwidth Memory (HBM) AI chips and solid-state drives (SSDs).
Collaboration topics with LG Electronics will include integrating AI technologies into home appliances, a move that will boost Microsoft’s competitive edge against rivals like Google and Meta. With SK Telecom, Microsoft is expected to delve further into cloud and 5G services.
These meetings are timely, as the global tech landscape sees an increased focus on AI development. By potentially integrating Microsoft’s AI services into products like Samsung’s smartphones and LG’s home appliances, Microsoft could significantly elevate its market standing.
Kyung of Samsung’s Device Solutions indicated last month that their new AI accelerators, Mach-1 and Mach-2, will soon move into mass production. These accelerators are designed to optimise the synergy between GPUs and HBM chips, promising a revolution in processing speeds. Earlier this month, the company unveiled the industry’s first LPDDR5X DRAM which aims to boost on-device AI.
SK Telecom, under CEO Ryu, spearheads the Global Telco AI Alliance (GTAA). This consortium, including major global players like Deutsche Telekom and SingTel, aims to develop AI infrastructure and generative AI services across a customer base exceeding 1.3 billion globally.
Last year, SK Telecom invested $100 million in AI startup Anthropic to develop a large language model (LLM) specifically for telcos. The collaborative endeavour extends to the Telco AI Platform, an ongoing project initiated by the GTAA.
The MS CEO Summit 2024 presents an opportunity for enhanced AI cooperation and technological advancement, securing Microsoft’s position as a pivotal player in the industry.
(Photo by Natalie Pedigo)
See also: Meta raises the bar with open source Llama 3 LLM
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: 5g, ai, artificial intelligence, ceo summit, cloud, lg, microsoft, ms ceo summit, Samsung, sk hynix, sk telecom, south korea, telecoms
jcmarchi · 8 days
Meta raises the bar with open source Llama 3 LLM
New Post has been published on https://thedigitalinsider.com/meta-raises-the-bar-with-open-source-llama-3-llm/
Meta has introduced Llama 3, the next generation of its state-of-the-art open source large language model (LLM). The tech giant claims Llama 3 establishes new performance benchmarks, surpassing previous industry-leading models like GPT-3.5 in real-world scenarios.
“With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today,” said Meta in a blog post announcing the release.
The initial Llama 3 models being opened up are 8 billion and 70 billion parameter versions. Meta says its teams are still training larger 400 billion+ parameter models which will be released over the coming months, alongside research papers detailing the work.
Llama 3 has been over two years in the making with significant resources dedicated to assembling high-quality training data, scaling up distributed training, optimising the model architecture, and innovative approaches to instruction fine-tuning.
Meta’s 70 billion parameter instruction fine-tuned model outperformed GPT-3.5, Claude, and other LLMs of comparable scale in human evaluations across 12 key usage scenarios like coding, reasoning, and creative writing. The company’s 8 billion parameter pretrained model also sets new benchmarks on popular LLM evaluation tasks:
“We believe these are the best open source models of their class, period,” stated Meta.
The tech giant is releasing the models via an “open by default” approach to further an open ecosystem around AI development. Llama 3 will be available across all major cloud providers, model hosts, hardware manufacturers, and AI platforms.
Victor Botev, CTO and co-founder of Iris.ai, said: “With the global shift towards AI regulation, the launch of Meta’s Llama 3 model is notable. By embracing transparency through open-sourcing, Meta aligns with the growing emphasis on responsible AI practices and ethical development.
“Moreover, this grants the opportunity for wider community education, as open models facilitate insights into development and the ability to scrutinise various approaches, with this transparency feeding back into the drafting and enforcement of regulation.”
Accompanying Meta’s latest models is an updated suite of AI safety tools, including the second iterations of Llama Guard for classifying risks and CyberSec Eval for assessing potential misuse. A new component called Code Shield has also been introduced to filter insecure code suggestions at inference time.
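The idea behind inference-time filtering of code suggestions can be sketched in a few lines. The snippet below is a toy illustration only, not Meta's actual Code Shield implementation: it scans a generated suggestion for a handful of known-risky patterns before returning it.

```python
import re

# Illustrative patterns that commonly flag insecure Python suggestions.
RISKY_PATTERNS = [
    (re.compile(r"\beval\s*\("), "use of eval()"),
    (re.compile(r"\bos\.system\s*\("), "shell execution via os.system"),
    (re.compile(r"\bpickle\.loads\s*\("), "unpickling untrusted data"),
    (re.compile(r"verify\s*=\s*False"), "TLS verification disabled"),
]

def filter_suggestion(code: str):
    """Return (allowed, reasons): block the suggestion if any risky pattern matches."""
    reasons = [label for pattern, label in RISKY_PATTERNS if pattern.search(code)]
    return (len(reasons) == 0, reasons)

allowed, reasons = filter_suggestion("resp = requests.get(url, verify=False)")
print(allowed, reasons)  # → False ['TLS verification disabled']
```

A production filter would use proper static analysis rather than regexes, but the shape is the same: the check sits between model output and the user, at inference time.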
“However, it’s important to maintain perspective – a model simply being open-source does not automatically equate to ethical AI,” Botev continued. “Addressing AI’s challenges requires a comprehensive approach to tackling issues like data privacy, algorithmic bias, and societal impacts – all key focuses of emerging AI regulations worldwide.
“While open initiatives like Llama 3 promote scrutiny and collaboration, their true impact hinges on a holistic approach to AI governance compliance and embedding ethics into AI systems’ lifecycles. Meta’s continuing efforts with the Llama model are a step in the right direction, but ethical AI demands sustained commitment from all stakeholders.”
Meta says it has adopted a “system-level approach” to responsible AI development and deployment with Llama 3. While the models have undergone extensive safety testing, the company emphasises that developers should implement their own input/output filtering in line with their application’s requirements.
The company’s end-user product integrating Llama 3 is Meta AI, which Meta claims is now the world’s leading AI assistant thanks to the new models. Users can access Meta AI via Facebook, Instagram, WhatsApp, Messenger and the web for productivity, learning, creativity, and general queries.  
Multimodal versions of Meta AI integrating vision capabilities are on the way, with an early preview coming to Meta’s Ray-Ban smart glasses.
Despite the considerable achievements of Llama 3, some in the AI field have expressed scepticism over whether Meta’s open approach is truly “for the good of society.”
However, just a day after Mistral AI set a new benchmark for open source models with Mixtral 8x22B, Meta’s release does once again raise the bar for openly-available LLMs.
See also: SAS aims to make AI accessible regardless of skill set with packaged AI models
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, artificial intelligence, large language model, llama 3, llm, meta, open source
0 notes
jcmarchi · 9 days
Text
Mixtral 8x22B sets new benchmark for open models
New Post has been published on https://thedigitalinsider.com/mixtral-8x22b-sets-new-benchmark-for-open-models/
Mixtral 8x22B sets new benchmark for open models
Mistral AI has released Mixtral 8x22B, which sets a new benchmark for open source models in performance and efficiency. The model boasts robust multilingual capabilities and superior mathematical and coding prowess.
Mixtral 8x22B operates as a Sparse Mixture-of-Experts (SMoE) model, utilising just 39 billion of its 141 billion parameters when active.
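A sparse mixture-of-experts layer routes each token to only a few experts, so most parameters sit idle on any given forward pass. This toy sketch (unrelated to Mistral's actual code, with made-up dimensions) routes a token to the top 2 of 8 experts and shows why only a fraction of the expert parameters are ever active:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d = 8, 2, 16

# Each "expert" is a small feed-forward weight matrix; a router scores experts per token.
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
router = rng.standard_normal((d, n_experts))

def smoe_forward(x):
    """Route token x to its top-k experts and mix their outputs by softmax weight."""
    scores = x @ router
    top = np.argsort(scores)[-top_k:]                   # indices of the top-k experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top)), top

x = rng.standard_normal(d)
y, used = smoe_forward(x)
print(f"experts used: {sorted(used.tolist())}, active fraction: {top_k / n_experts:.2f}")
```

In this toy, 2 of 8 experts fire per token (a 0.25 active fraction); Mixtral 8x22B's 39B-of-141B active parameters reflect the same principle at scale, which is what makes it faster at inference than dense models of comparable quality.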
Beyond its efficiency, the Mixtral 8x22B boasts fluency in multiple major languages including English, French, Italian, German, and Spanish. Its adeptness extends into technical domains with strong mathematical and coding capabilities. Notably, the model supports native function calling paired with a ‘constrained output mode,’ facilitating large-scale application development and tech upgrades.
Mixtral 8x22B Instruct is out. It significantly outperforms existing open models, and only uses 39B active parameters (making it significantly faster than 70B models during inference).
— Guillaume Lample (@GuillaumeLample) April 17, 2024
With a substantial 64K tokens context window, Mixtral 8x22B ensures precise information recall from voluminous documents, further appealing to enterprise-level utilisation where handling extensive data sets is routine.
In line with fostering a collaborative and innovative AI research environment, Mistral AI has released Mixtral 8x22B under the Apache 2.0 license. This highly permissive open-source license ensures no-restriction usage and enables widespread adoption.
Statistically, Mixtral 8x22B outclasses many existing models. In head-to-head comparisons on standard industry benchmarks – ranging from common sense and reasoning to subject-specific knowledge – Mistral’s new innovation excels. Figures released by Mistral AI illustrate that Mixtral 8x22B significantly outperforms the LLaMA 2 70B model in varied linguistic contexts across critical reasoning and knowledge benchmarks:
Furthermore, in the arenas of coding and maths, Mixtral continues its dominance among open models. Updated results show an impressive performance improvement in mathematical benchmarks, following the release of an instructed version of the model:
Prospective users and developers are urged to explore Mixtral 8x22B on La Plateforme, Mistral AI’s interactive platform. Here, they can engage directly with the model.
In an era where AI’s role is ever-expanding, Mixtral 8x22B’s blend of high performance, efficiency, and open accessibility marks a significant milestone in the democratisation of advanced AI tools.
(Photo by Joshua Golde)
See also: SAS aims to make AI accessible regardless of skill set with packaged AI models
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: 8x22b, ai, artificial intelligence, development, mistral ai, mixtral, Model, open source
0 notes
jcmarchi · 9 days
Text
SAS aims to make AI accessible regardless of skill set with packaged AI models - AI News
New Post has been published on https://thedigitalinsider.com/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models-ai-news/
SAS aims to make AI accessible regardless of skill set with packaged AI models - AI News
SAS, a specialist in data and AI solutions, has unveiled what it describes as a “game-changing approach” for organisations to tackle business challenges head-on.
Introducing lightweight, industry-specific AI models for individual licence, SAS hopes to equip organisations with readily deployable AI technology to productionise real-world use cases with unparalleled efficiency.
Chandana Gopal, research director, Future of Intelligence, IDC, said: “SAS is evolving its portfolio to meet wider user needs and capture market share with innovative new offerings,
“An area that is ripe for SAS is productising models built on SAS’ core assets, talent and IP from its wealth of experience working with customers to solve industry problems.”
In today’s market, the consumption of models is primarily focused on large language models (LLMs) for generative AI. In reality, LLMs are only a small part of the modelling needs of real-world production deployments of AI and business decision making. With the new offering, SAS is moving beyond LLMs and delivering industry-proven deterministic AI models spanning use cases such as fraud detection, supply chain optimisation, entity management, document conversation, and health care payment integrity.
Unlike traditional AI implementations that can be cumbersome and time-consuming, SAS’ industry-specific models are engineered for quick integration, enabling organisations to operationalise trustworthy AI technology and accelerate the realisation of tangible benefits and trusted results.
Expanding market footprint
Organisations are facing pressure to compete effectively and are looking to AI to gain an edge. At the same time, staffing data science teams has never been more challenging due to AI skills shortages. Consequently, businesses are demanding agility in using AI to solve problems and require flexible AI solutions to quickly drive business outcomes. SAS’ easy-to-use, yet powerful models tuned for the enterprise enable organisations to benefit from a half-century of SAS’ leadership across industries.
Delivering industry models as packaged offerings is one outcome of SAS’ commitment of $1 billion to AI-powered industry solutions. As outlined in the May 2023 announcement, the investment in AI builds on SAS’ decades-long focus on providing packaged solutions to address industry challenges in banking, government, health care and more.
Udo Sglavo, VP for AI and Analytics, SAS, said: “Models are the perfect complement to our existing solutions and SAS Viya platform offerings and cater to diverse business needs across various audiences, ensuring that innovation reaches every corner of our ecosystem. 
“By tailoring our approach to understanding specific industry needs, our frameworks empower businesses to flourish in their distinctive environments.”
Bringing AI to the masses
SAS is democratising AI by offering out-of-the-box, lightweight AI models – making AI accessible regardless of skill set – starting with an AI assistant for warehouse space optimisation. Leveraging technology like large language models, these assistants cater to nontechnical users, translating interactions into optimised workflows seamlessly and aiding in faster planning decisions.
Sglavo said: “SAS Models provide organisations with flexible, timely and accessible AI that aligns with industry challenges.
“Whether you’re embarking on your AI journey or seeking to accelerate the expansion of AI across your enterprise, SAS offers unparalleled depth and breadth in addressing your business’s unique needs.”
The first SAS Models are expected to be generally available later this year.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: AI models, decision making, deployment, llm, sas, skills
0 notes
jcmarchi · 9 days
Text
80% of AI decision makers are worried about data privacy and security - AI News
New Post has been published on https://thedigitalinsider.com/80-of-ai-decision-makers-are-worried-about-data-privacy-and-security-ai-news/
80% of AI decision makers are worried about data privacy and security - AI News
Organisations are enthusiastic about generative AI’s potential for increasing their business and people productivity, but lack of strategic planning and talent shortages are preventing them from realising its true value.
This is according to a study conducted in early 2024 by Coleman Parkes Research and sponsored by data analytics firm SAS, which surveyed 300 US GenAI strategy or data analytics decision makers to pulse check major areas of investment and the hurdles organisations are facing.
Marinela Profi, strategic AI advisor at SAS, said: “Organisations are realising that large language models (LLMs) alone don’t solve business challenges. 
“GenAI should be treated as an ideal contributor to hyper automation and the acceleration of existing processes and systems rather than the new shiny toy that will help organisations realise all their business aspirations. Time spent developing a progressive strategy and investing in technology that offers integration, governance and explainability of LLMs are crucial steps all organisations should take before jumping in with both feet and getting ‘locked in.’”
Organisations are hitting stumbling blocks in four key areas of implementation:
• Increasing trust in data usage and achieving compliance. Only one in 10 organisations has a reliable system in place to measure bias and privacy risk in LLMs. Moreover, 93% of U.S. businesses lack a comprehensive governance framework for GenAI, and the majority are at risk of noncompliance when it comes to regulation.
• Integrating GenAI into existing systems and processes. Organisations reveal they’re experiencing compatibility issues when trying to combine GenAI with their current systems.
• Talent and skills. In-house GenAI is lacking. As HR departments encounter a scarcity of suitable hires, organisational leaders worry they don’t have access to the necessary skills to make the most of their GenAI investment.
• Predicting costs. Leaders cite prohibitive direct and indirect costs associated with using LLMs. Model creators provide a token cost estimate (which organisations now realise is prohibitive), but the costs of private knowledge preparation, training and ModelOps management are complex and difficult to predict.
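The token cost estimates the survey respondents refer to reduce to simple arithmetic, which makes it easy to see how quickly spend scales with volume. The per-1K-token rates below are hypothetical placeholders, not any provider's actual pricing:

```python
# Hypothetical per-1K-token rates in USD -- placeholders, not real provider pricing.
INPUT_RATE_PER_1K = 0.01
OUTPUT_RATE_PER_1K = 0.03

def monthly_llm_cost(requests_per_day, input_tokens, output_tokens, days=30):
    """Back-of-envelope monthly spend for an LLM API workload."""
    per_request = (input_tokens / 1000) * INPUT_RATE_PER_1K \
                + (output_tokens / 1000) * OUTPUT_RATE_PER_1K
    return requests_per_day * per_request * days

# e.g. 10,000 requests/day, each with 1,500 input and 500 output tokens
cost = monthly_llm_cost(10_000, 1_500, 500)
print(f"${cost:,.0f}/month")  # → $9,000/month
```

Note this only covers API token spend; as the survey points out, data preparation, training and ModelOps costs sit on top of it and are far harder to estimate.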
Profi added: “It’s going to come down to identifying real-world use cases that deliver the highest value and solve human needs in a sustainable and scalable manner. 
“Through this study, we’re continuing our commitment to helping organisations stay relevant, invest their money wisely and remain resilient. In an era where AI technology evolves almost daily, competitive advantage is highly dependent on the ability to embrace the resiliency rules.”
Details of the study were unveiled today at SAS Innovate in Las Vegas, SAS Software’s AI and analytics conference for business leaders, technical users and SAS partners.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: llm, privacy, productivity, sas, security
0 notes
jcmarchi · 10 days
Text
Kamal Ahluwalia, Ikigai Labs: How to take your business to the next level with generative AI - AI News
New Post has been published on https://thedigitalinsider.com/kamal-ahluwalia-ikigai-labs-how-to-take-your-business-to-the-next-level-with-generative-ai-ai-news/
Kamal Ahluwalia, Ikigai Labs: How to take your business to the next level with generative AI - AI News
AI News caught up with president of Ikigai Labs, Kamal Ahluwalia, to discuss all things gen AI, including top tips on how to adopt and utilise the tech, and the importance of embedding ethics into AI design.
Could you tell us a little bit about Ikigai Labs and how it can help companies?
Ikigai is helping organisations transform sparse, siloed enterprise data into predictive and actionable insights with a generative AI platform specifically designed for structured, tabular data.  
A significant portion of enterprise data is structured, tabular data, residing in systems like SAP and Salesforce. This data drives the planning and forecasting for an entire business. While there is a lot of excitement around Large Language Models (LLMs), which are great for unstructured data like text, Ikigai’s patented Large Graphical Models (LGMs), developed out of MIT, are focused on solving problems using structured data.  
Ikigai’s solution focuses particularly on time-series datasets, as enterprises run on four key time series: sales, products, employees, and capital/cash. Understanding how these time series come together in critical moments, such as launching a new product or entering a new geography, is crucial for making better decisions that drive optimal outcomes. 
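To make the time-series framing above concrete, here is a minimal one-step-ahead forecast using simple exponential smoothing. It is a generic textbook technique standing in for the idea of forecasting a business time series, and is unrelated to Ikigai's actual LGM models:

```python
def exponential_smoothing(series, alpha=0.5):
    """One-step-ahead forecast: blend each new observation with the running forecast."""
    forecast = series[0]
    for value in series[1:]:
        forecast = alpha * value + (1 - alpha) * forecast
    return forecast

# Monthly sales for a product line (toy numbers)
sales = [120, 130, 125, 140, 150, 160]
print(round(exponential_smoothing(sales), 1))  # → 150.6
```

Real planning systems layer seasonality, cross-series effects and uncertainty on top of this, but the core loop of updating a forecast as each period's sales arrive is the same.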
How would you describe the current generative AI landscape, and how do you envision it developing in the future? 
The technologies that have captured the imagination, such as LLMs from OpenAI, Anthropic, and others, come from a consumer background. They were trained on internet-scale data, and the training datasets are only getting larger, which requires significant computing power and storage. It took $100m to train GPT-4, and GPT-5 is expected to cost $2.5bn.
This reality works in a consumer setting, where costs can be shared across a very large user set, and some mistakes are just part of the training process. But in the enterprise, mistakes cannot be tolerated, hallucinations are not an option, and accuracy is paramount. Additionally, the cost of training a model on internet-scale data is just not affordable, and companies that leverage a foundational model risk exposure of their IP and other sensitive data.  
While some companies have gone the route of building their own tech stack so LLMs can be used in a safe environment, most organisations lack the talent and resources to build it themselves. 
In spite of the challenges, enterprises want the kind of experience that LLMs provide. But the results need to be accurate – even when the data is sparse – and there must be a way to keep confidential data out of a foundational model. It’s also critical to find ways to lower the total cost of ownership, including the cost to train and upgrade the models, reliance on GPUs, and other issues related to governance and data retention. All of this leads to a very different set of solutions than what we currently have. 
How can companies create a strategy to maximise the benefits of generative AI? 
While much has been written about Large Language Models (LLMs) and their potential applications, many customers are asking “how do I build differentiation?”  
With LLMs, nearly everyone will have access to the same capabilities, such as chatbot experiences or generating marketing emails and content – if everyone has the same use cases, it’s not a differentiator. 
The key is to shift the focus from generic use cases to finding areas of optimisation and understanding specific to your business and circumstances. For example, if you’re in manufacturing and need to move operations out of China, how do you plan for uncertainty in logistics, labour, and other factors? Or, if you want to build more eco-friendly products, materials, vendors, and cost structures will change. How do you model this? 
These use cases are some of the ways companies are attempting to use AI to run their business and plan in an uncertain world. Finding specificity and tailoring the technology to your unique needs is probably the best way to use AI to find true competitive advantage.  
What are the main challenges companies face when deploying generative AI and how can these be overcome? 
Listening to customers, we’ve learned that while many have experimented with generative AI, only a fraction have pushed things through to production due to prohibitive costs and security concerns. But what if your models could be trained just on your own data, running on CPUs rather than requiring GPUs, with accurate results and transparency around how you’re getting those results? What if all the regulatory and compliance issues were addressed, leaving no questions about where the data came from or how much data is being retrained? This is what Ikigai is bringing to the table with Large Graphical Models.  
One challenge we’ve helped businesses address is the data problem. Nearly 100% of organisations are working with limited or imperfect data, and in many cases, this is a barrier to doing anything with AI. Companies often talk about data clean-up, but in reality, waiting for perfect data can hinder progress. AI solutions that can work with limited, sparse data are essential, as they allow companies to learn from what they have and account for change management. 
The other challenge is how internal teams can partner with the technology for better outcomes. Especially in regulated industries, human oversight, validation, and reinforcement learning are necessary. Adding an expert in the loop ensures that AI is not making decisions in a vacuum, so finding solutions that incorporate human expertise is key. 
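The expert-in-the-loop pattern described here can be sketched as a confidence gate: the model acts autonomously only above a threshold, and everything else is queued for human review. The function name and threshold below are hypothetical, for illustration only:

```python
REVIEW_THRESHOLD = 0.9  # hypothetical cutoff; tune per domain and regulatory regime

def route_decision(prediction, confidence):
    """Auto-approve high-confidence predictions; send the rest to a human expert."""
    if confidence >= REVIEW_THRESHOLD:
        return {"decision": prediction, "route": "auto"}
    return {"decision": None, "route": "human_review", "suggested": prediction}

print(route_decision("approve_claim", 0.97))  # auto path
print(route_decision("approve_claim", 0.62))  # queued for an expert
```

The human decisions collected on the review path can then feed reinforcement or fine-tuning signals back to the model, which is the loop the interview alludes to.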
To what extent do you think adopting generative AI successfully requires a shift in company culture and mindset? 
Successfully adopting generative AI requires a significant shift in company culture and mindset, with strong commitment from executives and continuous education. I saw this firsthand at Eightfold when we were bringing our AI platform to companies in over 140 countries. I always recommend that teams first educate executives on what’s possible, how to do it, and how to get there. They need to have the commitment to see it through, which involves some experimentation and some committed course of action. They must also understand the expectations placed on colleagues, so they can be prepared for AI becoming a part of daily life. 
Top-down commitment, and communication from executives goes a long way, as there’s a lot of fear-mongering suggesting that AI will take jobs, and executives need to set the tone that, while AI won’t eliminate jobs outright, everyone’s job is going to change in the next couple of years, not just for people at the bottom or middle levels, but for everyone. Ongoing education throughout the deployment is key for teams learning how to get value from the tools, and adapt the way they work to incorporate the new skillsets.  
It’s also important to adopt technologies that play to the reality of the enterprise. For example, you have to let go of the idea that you need to get all your data in order to take action. In time-series forecasting, by the time you’ve taken four quarters to clean up data, there’s more data available, and it’s probably a mess. If you keep waiting for perfect data, you won’t be able to use your data at all. So AI solutions that can work with limited, sparse data are crucial, as you have to be able to learn from what you have. 
Another important aspect is adding an expert in the loop. It would be a mistake to assume AI is magic. There are a lot of decisions, especially in regulated industries, where you can’t have AI just make the decision. You need oversight, validation, and reinforcement learning – this is exactly how consumer solutions became so good.  
Are there any case studies you could share with us regarding companies successfully utilising generative AI? 
One interesting example is a Marketplace customer that is using us to rationalise their product catalogue. They’re looking to understand the optimal number of SKUs to carry, so they can reduce their inventory carrying costs while still meeting customer needs. Another partner does workforce planning, forecasting, and scheduling, using us for labour balancing in hospitals, retail, and hospitality companies. In their case, all their data is sitting in different systems, and they must bring it into one view so they can balance employee wellness with operational excellence. But because we can support a wide variety of use cases, we work with clients doing everything from forecasting product usage as part of a move to a consumption-based model, to fraud detection. 
You recently launched an AI Ethics Council. What kind of people are on this council and what is its purpose? 
Our AI Ethics Council is all about making sure that the AI technology we’re building is grounded in ethics and responsible design. It’s a core part of who we are as a company, and I’m humbled and honoured to be a part of it alongside such an impressive group of individuals. Our council includes luminaries like Dr. Munther Dahleh, the Founding Director of the Institute for Data Systems and Society (IDSS) and a Professor at MIT; Aram A. Gavoor, Associate Dean at George Washington University and a recognised scholar in administrative law and national security; Dr. Michael Kearns, the National Center Chair for Computer and Information Science at the University of Pennsylvania; and Dr. Michael I. Jordan, a Distinguished Professor at UC Berkeley in the Departments of Electrical Engineering and Computer Science, and Statistics.
The purpose of our AI Ethics Council is to tackle pressing ethical and security issues impacting AI development and usage. As AI rapidly becomes central to consumers and businesses across nearly every industry, we believe it is crucial to prioritise responsible development and cannot ignore the need for ethical considerations. The council will convene quarterly to discuss important topics such as AI governance, data minimisation, confidentiality, lawfulness, accuracy and more. Following each meeting, the council will publish recommendations for actions and next steps that organisations should consider moving forward. As part of Ikigai Labs’ commitment to ethical AI deployment and innovation, we will implement the action items recommended by the council. 
Ikigai Labs raised $25m funding in August last year. How will this help develop the company, its offerings and, ultimately, your customers? 
We have a strong foundation of research and innovation coming out of our core team with MIT, so the funding this time is focused on making the solution more robust, as well as bringing on the team that works with the clients and partners.  
We can solve a lot of problems but are staying focused on solving just a few meaningful ones through time-series super apps. We know that every company runs on four time series, so the goal is covering these in depth and with speed: things like sales forecasting, consumption forecasting, discount forecasting, how to sunset products, catalogue optimisation, etc. We’re excited and looking forward to putting GenAI for tabular data into the hands of as many customers as possible. 
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: data, ethics, generative ai, Ikigai Labs, llm
0 notes
jcmarchi · 11 days
Text
Hugging Face launches Idefics2 vision-language model
New Post has been published on https://thedigitalinsider.com/hugging-face-launches-idefics2-vision-language-model/
Hugging Face launches Idefics2 vision-language model
Hugging Face has announced the release of Idefics2, a versatile model capable of understanding and generating text responses based on both images and texts. The model sets a new benchmark for answering visual questions, describing visual content, story creation from images, document information extraction, and even performing arithmetic operations based on visual input.
Idefics2 leapfrogs its predecessor, Idefics1, with just eight billion parameters and the versatility afforded by its open license (Apache 2.0), along with remarkably enhanced Optical Character Recognition (OCR) capabilities.
The model not only showcases exceptional performance in visual question answering benchmarks but also holds its ground against far larger contemporaries such as LLava-Next-34B and MM1-30B-chat:
Central to Idefics2’s appeal is its integration with Hugging Face’s Transformers from the outset, ensuring ease of fine-tuning for a broad array of multimodal applications. For those eager to dive in, models are available for experimentation on the Hugging Face Hub.
A standout feature of Idefics2 is its comprehensive training philosophy, blending openly available datasets including web documents, image-caption pairs, and OCR data. Furthermore, it introduces an innovative fine-tuning dataset dubbed ‘The Cauldron,’ amalgamating 50 meticulously curated datasets for multifaceted conversational training.
Idefics2 exhibits a refined approach to image manipulation, maintaining native resolutions and aspect ratios—a notable deviation from conventional resizing norms in computer vision. Its architecture benefits significantly from advanced OCR capabilities, adeptly transcribing textual content within images and documents, and boasts improved performance in interpreting charts and figures.
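The native-resolution handling described above contrasts with the usual fixed-size square resize in vision models. A small sketch of aspect-ratio-preserving downscaling (illustrative only, not Idefics2's actual preprocessing; the 980-pixel cap is an assumed value):

```python
def fit_within(width, height, max_side=980):
    """Scale dimensions to fit max_side on the longest edge, preserving aspect ratio."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height          # already small enough: keep native resolution
    scale = max_side / longest
    return round(width * scale), round(height * scale)

print(fit_within(1920, 1080))  # → (980, 551): landscape image scaled down
print(fit_within(640, 480))    # → (640, 480): left at native resolution
```

Keeping the original proportions avoids the distortion a forced square resize introduces, which matters for OCR and chart reading where text geometry carries information.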
Simplifying the integration of visual features into the language backbone marks a shift from its predecessor’s architecture, with the adoption of a learned Perceiver pooling and MLP modality projection enhancing Idefics2’s overall efficacy.
This advancement in vision-language models opens up new avenues for exploring multimodal interactions, with Idefics2 poised to serve as a foundational tool for the community. Its performance enhancements and technical innovations underscore the potential of combining visual and textual data in creating sophisticated, contextually-aware AI systems.
For enthusiasts and researchers looking to leverage Idefics2’s capabilities, Hugging Face provides a detailed fine-tuning tutorial.
See also: OpenAI makes GPT-4 Turbo with Vision API generally available
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, artificial intelligence, benchmark, hugging face, idefics 2, idefics2, Model, vision-language
0 notes
jcmarchi · 12 days
Text
OpenAI chooses Tokyo for its first Asian office
New Post has been published on https://thedigitalinsider.com/openai-chooses-tokyo-for-its-first-asian-office/
OpenAI chooses Tokyo for its first Asian office
OpenAI has announced the opening of a new office in Tokyo to drive its expansion into the Asian market.
The new office aims to foster collaboration with the Japanese government, local businesses, and research institutions to develop AI tools tailored to Japan’s unique requirements.
Tokyo was selected for OpenAI’s first Asian venture due to its global leadership in technology, a culture dedicated to service, and an innovative community.
“We’re excited to be in Japan which has a rich history of people and technology coming together to do more,” explained Sam Altman, CEO of OpenAI. “We believe AI will accelerate work by empowering people to be more creative and productive, while also delivering broad value to current and new industries that have yet to be imagined.”
To ensure effective engagement within the local community and spearhead OpenAI’s initiatives in Japan, Tadao Nagasaki has been welcomed as the president of OpenAI Japan. Nagasaki’s role will involve leading commercial and market engagement efforts and building a local team to progress global affairs, go-to-market, communications, operations, and other functions catered to Japan.
OpenAI is granting local businesses early access to a customised GPT-4 model optimised for the Japanese language. This custom model boasts enhanced performance in translating and summarising Japanese text, offers cost-effectiveness, and operates up to three times faster than its predecessor. 
Speak – a leading English learning app in Japan – reportedly benefits from faster tutor explanations in Japanese with a significant reduction in token cost, facilitating improved quality of tutor feedback across more applications with higher limits per user.
The new office positions OpenAI closer to major businesses such as Daikin, Rakuten, and TOYOTA Connected, which are leveraging ChatGPT Enterprise to streamline complex business operations, assist in data analysis, and improve internal reporting.
Local governments, including Yokosuka City, are adopting the technology to enhance public service efficiency. Yokosuka City has notably expanded ChatGPT access to nearly all city employees, with 80 percent reporting productivity gains.
The Japanese government’s role as a leading voice in AI policy – especially after chairing the Hiroshima AI Process – aims to foster AI development aligned with human dignity, diversity, and inclusion, and sustainable societies. OpenAI seeks to contribute to the local ecosystem and explore AI solutions for societal challenges, such as rural depopulation and labour shortages, within the region.
OpenAI’s expansion into Japan highlights its global mission to ensure artificial general intelligence benefits all of humanity, underlining the importance of incorporating diverse perspectives.
(Photo by Jezael Melgoza)
See also: US and Japan announce sweeping AI and tech collaboration
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: asia, japan, openai
0 notes