
  • NATURE INDEX
  • 09 December 2020

Six researchers who are shaping the future of artificial intelligence

  • Gemma Conroy,
  • Hepeng Jia,
  • Benjamin Plackett &
  • Andy Tay

Andy Tay is a science writer in Singapore.

As artificial intelligence (AI) becomes ubiquitous in fields such as medicine, education and security, there are significant ethical and technical challenges to overcome.

CYNTHIA BREAZEAL: Personal touch

Illustrated portrait of Cynthia Breazeal

Credit: Taj Francis

As the credits of Star Wars rolled to a close in a 1970s cinema, 10-year-old Cynthia Breazeal remained fixated on C-3PO, the anxious robot. “Typically, when you saw robots in science fiction, they were mindless, but in Star Wars they had rich personalities and could form friendships,” says Breazeal, associate director of the Massachusetts Institute of Technology (MIT) Media Lab in Cambridge, Massachusetts. “I assumed these robots would never exist in my lifetime.”

A pioneer of social robotics and human–robot interaction, Breazeal has made a career of conceptualizing and building robots with personality. As a master’s student at MIT’s Humanoid Robotics Group, she created her first robot, an insectile machine named Hannibal that was designed for autonomous planetary exploration and funded by NASA.

Some of the best-known robots Breazeal developed as a young researcher include Kismet, one of the first robots that could demonstrate social and emotional interactions with humans; Cog, a humanoid robot that could track faces and grasp objects; and Leonardo, described by the Institute of Electrical and Electronics Engineers in New Jersey as “one of the most sophisticated social robots ever built”.


In 2014, Breazeal founded Jibo, a Boston-based company that launched her first consumer product, a household robot companion, also called Jibo. The company raised more than US$70 million and sold more than 6,000 units. In May 2020, NTT Disruption, a subsidiary of the London-based telecommunications company NTT, bought the Jibo technology and plans to explore the robot’s applications in health care and education.

Breazeal returned to academia full time this year as director of the MIT Personal Robots Group. She is investigating whether robots such as Jibo can help to improve students’ mental health and wellbeing by providing companionship. In a preprint published in July, which has yet to be peer-reviewed, Breazeal’s team reports that daily interactions with Jibo significantly improved the mood of university students (S. Jeong et al. Preprint at https://arxiv.org/abs/2009.03829; 2020). “It’s about finding ways to use robots to help support people,” she says.

In April 2020, Breazeal launched AI Education, a free online resource that teaches children how to design and use AI responsibly. “Our hope is to turn the hundreds of students we’ve started with into tens of thousands in a couple of years,” says Breazeal. — by Benjamin Plackett

CHEN HAO: Big picture

Illustrated portrait of Chen Hao

Analysing medical images is an intensive and technical task, and there is a shortage of pathologists and radiologists to meet demand. In a 2018 survey by the UK’s Royal College of Pathologists, just 3% of National Health Service histopathology departments (which study diseases in tissues) said they had enough staff. A June 2020 report published by the Association of American Medical Colleges found that the United States’ shortage of physician specialists could climb to nearly 42,000 by 2033.

AI systems that automate part of the medical imaging analysis process could be key to easing the burden on specialists. They can cut tasks that usually take hours or days down to seconds, says Chen Hao, founder of Imsight, an AI medical imaging start-up based in Shenzhen, China.

Launched in 2017, Imsight’s products include Lung-Sight, which can automatically detect and locate signs of disease in CT scans, and Breast-Sight, which identifies and measures the metastatic area in a tissue sample. “The analysis allows doctors to make a quick decision based on all of the information available,” says Chen.
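Measuring an affected region, as Breast-Sight does, typically reduces to arithmetic over a model’s output. The sketch below is illustrative only, not Imsight’s actual pipeline: it thresholds a hypothetical per-pixel probability map from a segmentation model and sums the flagged pixels to estimate an area.

```python
# Toy sketch (not Imsight's method): threshold a segmentation model's
# per-pixel disease probabilities, then measure the flagged area.
def measure_area(prob_map, pixel_area_mm2=0.25, threshold=0.5):
    """prob_map: 2D list of per-pixel probabilities of disease."""
    flagged = sum(p >= threshold for row in prob_map for p in row)
    return flagged * pixel_area_mm2  # area in square millimetres

prob_map = [
    [0.1, 0.2, 0.9],
    [0.8, 0.7, 0.3],
    [0.1, 0.6, 0.2],
]
print(measure_area(prob_map))  # 4 pixels flagged -> 1.0 mm^2
```

In practice the probability map would come from a trained network and the pixel size from scanner calibration; both are invented here.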

Since the outbreak of COVID-19, two of Shenzhen’s largest hospitals have been using Imsight’s imaging technology to analyse subtle changes in patients’ lungs caused by treatment, which enables doctors to identify cases with severe side effects.

In 2019, Chen received the Young Scientist Impact Award from the Medical Image Computing and Computer-Assisted Intervention Society, a non-profit organization in Rochester, Minnesota. The award recognized a paper he led that proposed using a neural network to process fetal ultrasound images (H. Chen et al. in Medical Image Computing and Computer-Assisted Intervention — MICCAI 2015 (eds N. Navab et al.) 507–514; Springer, 2015). The technique, which has since been adopted in clinical practice in China, reduces the workload of the sonographer.

Despite the rapid advancement of AI’s role in health care, Chen rejects the idea that doctors can be easily replaced. “AI will not replace doctors,” he says. “But doctors who are better able to utilize AI will replace doctors who cannot.” — by Hepeng Jia

ANNA SCAIFE: Star sifting

Illustrated portrait of Anna Scaife

When construction of the Square Kilometre Array (SKA) is complete, it will be the world’s largest radio telescope. With roughly 200 radio dishes in South Africa and 130,000 antennas in Australia expected to be installed by the 2030s, it will produce an enormous amount of raw data, more than current systems can efficiently transmit and process.

Anna Scaife, professor of radio astronomy at the University of Manchester, UK, is building an AI system to automate radio astronomy data processing. Her aim is to reduce manual identification, classification and cataloguing of signals from astronomical objects such as radio galaxies, active galaxies that emit more light at radio wavelengths than at visible wavelengths.

In 2019, Scaife was the recipient of the Jackson-Gwilt Medal, one of the highest honours bestowed by the UK Royal Astronomical Society (RAS). The RAS recognized a study led by Scaife, which outlined data calibration models for Europe’s Low Frequency Array (LOFAR) telescope, the largest radio telescope operating at the lowest frequencies that can be observed from Earth (A. M. M. Scaife and G. H. Heald Mon. Not. R. Astron. Soc. 423, L30–L34; 2012). The techniques in Scaife’s paper underpin most low-frequency radio observations today.

“It’s a very peculiar feeling to win an RAS medal,” says Scaife. “It’s a mixture of excitement and disbelief, especially because you don’t even know that you were being considered, so you don’t have any opportunity to prepare yourself. Suddenly, your name is on a list that commemorates more than 100 years of astronomy history, and you’ve just got to deal with that.”

Scaife is the academic co-director of Policy@Manchester, the University of Manchester’s policy engagement institute, where she helps researchers to better communicate their findings to policymakers. She also runs a data science training network that involves South African and UK partner universities, with the aim to build a team of researchers to work with the SKA once it comes online. “I hope that the training programmes I have developed can equip young people with skills for the data science sector,” says Scaife. — by Andy Tay

TIMNIT GEBRU: Algorithmic bias

Illustrated portrait of Timnit Gebru

Computer vision is one of the most rapidly developing areas of AI. Algorithms trained to read and interpret images are the foundation of technologies such as self-driving cars, surveillance and augmented reality.

Timnit Gebru, a computer scientist and former co-lead of the Ethical AI Team at Google in Mountain View, California, recognizes the promise of such advances, but is concerned about how they could affect underrepresented communities, particularly people of colour. “My research is about trying to minimize and mitigate the negative impacts of AI,” she says.

In a 2018 study, Gebru and Joy Buolamwini, a computer scientist at the MIT Media Lab, concluded that three commonly used facial analysis algorithms drew overwhelmingly on data obtained from light-skinned people (J. Buolamwini and T. Gebru. Proc. Mach. Learn. Res. 81, 77–91; 2018). Error rates for dark-skinned females were found to be as high as 34.7%, due to a lack of data, whereas the maximum error rate for light-skinned males was 0.8%. This could result in people with darker skin getting inaccurate medical diagnoses, says Gebru. “If you’re using this technology to detect melanoma from skin photos, for example, then a lot of dark-skinned people could be misdiagnosed.”
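The kind of disparity Buolamwini and Gebru quantified surfaces only in a disaggregated evaluation: computing the error rate separately for each demographic subgroup rather than reporting one aggregate figure. The sketch below uses invented data and illustrates the idea only, not the study’s methodology.

```python
# Hypothetical audit: error rates per subgroup (data is made up).
from collections import defaultdict

def error_rates_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data showing a disparity an aggregate error rate would hide.
records = [
    ("darker-skinned female", "male", "female"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "male", "female"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]
print(error_rates_by_group(records))
# one subgroup errs on 2 of 3 examples, the other on 0 of 3
```

An aggregate over all six records would report a 33% error rate and hide the fact that every error falls on one subgroup.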

Facial recognition used for government surveillance, such as during the Hong Kong protests in 2019, is also highly problematic, says Gebru, because the technology is more likely to misidentify a person with darker skin. “I’m working to have face surveillance banned,” she says. “Even if dark-skinned people were accurately identified, it’s the most marginalized groups that are most subject to surveillance.”

In 2017, as a PhD student at Stanford University in California under the supervision of Li Fei-Fei, Gebru co-founded the non-profit Black in AI with Rediet Abebe, a computer scientist at Cornell University in Ithaca, New York. The organization seeks to increase the presence of Black people in AI research by providing mentorship for researchers as they apply to graduate programmes, navigate graduate school, and enter and progress through the postgraduate job market. The organization is also advocating for structural changes within institutions to address bias in hiring and promotion decisions. Its annual workshop calls for papers with at least one Black researcher as the main author or co-author.  — by Benjamin Plackett

YUTAKA MATSUO: Internet miner

Illustrated portrait of Yutaka Matsuo

In 2010, Yutaka Matsuo created an algorithm that could detect the first signs of earthquakes by monitoring Twitter for mentions of tremors. His system not only detected 96% of the earthquakes that were registered by the Japan Meteorological Agency (JMA), it also sent e-mail alerts to registered users much faster than announcements could be broadcast by the JMA.
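Matsuo’s published system treated Twitter users as a network of noisy social sensors; the details are beyond a profile like this, but the core intuition, flagging an event when mentions of a keyword suddenly spike above their recent baseline, can be sketched in a few lines. This is a toy illustration, not his actual algorithm.

```python
# Toy burst detector: flag an event when keyword mentions in the
# current minute far exceed the recent average (not Matsuo's model).
from collections import deque

def make_burst_detector(window=10, threshold=5.0):
    history = deque(maxlen=window)  # recent per-minute mention counts
    def observe(count):
        baseline = sum(history) / len(history) if history else 0.0
        history.append(count)
        return baseline > 0 and count > threshold * baseline
    return observe

detect = make_burst_detector()
for count in [2, 1, 3, 2, 2, 1, 2]:  # normal chatter about "tremor"
    print(detect(count))             # False each time
print(detect(40))                    # sudden spike -> True
```

A production system would also need to filter false positives (film releases, metaphorical "tremors") and localize the event, which is where the harder modelling lives.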

He applied a similar web-mining technique to the stock market. “We were able to classify news articles about companies as either positive or negative,” says Matsuo. “We combined that data to accurately predict profit growth and performance.”

Matsuo’s ability to extract valuable information from what people are saying online has contributed to his reputation as one of Japan’s leading AI researchers. He is a professor at the University of Tokyo’s Department of Technology Management and president of the Japan Deep Learning Association, a non-profit organization that fosters AI researchers and engineers by offering training and certification exams. In 2019, he was the first AI specialist added to the board of Japanese technology giant SoftBank.

Over the past decade, Matsuo and his team have been supporting young entrepreneurs in launching internationally successful AI start-ups. “We want to create an ecosystem like Silicon Valley, which Japan just doesn’t have,” he says.

Among the start-ups supported by Matsuo is Neural Pocket, launched in 2018 by Roi Shigematsu, a University of Tokyo graduate. The company analyses photos and videos to provide insights into consumer behaviour.

Matsuo is also an adviser for ReadyFor, one of Japan’s earliest crowd-funding platforms. The company was launched in 2011 by Haruka Mera, who first collaborated with Matsuo as an undergraduate student at Keio University in Tokyo. The platform is raising funds for people affected by the COVID-19 pandemic, and reports that its total transaction value for donations rose by 4,400% between March and April 2020.

Matsuo encourages young researchers who are interested in launching AI start-ups to seek partnerships with industry. “Japanese society is quite conservative,” he says. “If you’re older, you’re more likely to get a large budget from public funds, but I’m 45, and that’s still considered too young.” — by Benjamin Plackett

DACHENG TAO: Machine visionary

Illustrated portrait of Dacheng Tao

By 2030, an estimated one in ten cars globally will be self-driving. The key to getting these autonomous vehicles on the road is designing computer-vision systems that can identify obstacles to avoid accidents at least as effectively as a human driver.

Neural networks, sets of AI algorithms inspired by neurological processes that fire in the human cerebral cortex, form the ‘brains’ of self-driving cars. Dacheng Tao, a computer scientist at the University of Sydney, Australia, designs neural networks for computer-vision tasks. He is also building models and algorithms that can process videos captured by moving cameras, such as those in self-driving cars.

“Neural networks are very useful for modelling the world,” says Tao, director of the UBTECH Sydney Artificial Intelligence Centre, a partnership between the University of Sydney and global robotics company UBTECH.

In 2017, Tao was awarded an Australian Laureate Fellowship for a five-year project that uses deep-learning techniques to improve moving-camera computer vision in autonomous machines and vehicles. A subset of machine learning, deep learning uses neural networks to build systems that can ‘learn’ through their own data processing.

Since launching in 2018, Tao’s project has resulted in more than 40 journal publications and conference papers. He was among the most prolific researchers in AI from 2015 to 2019, as tracked by the Dimensions database, and is one of Australia’s most highly cited computer scientists. Since 2015, Tao’s papers have amassed more than 42,500 citations, as indexed by Google Scholar. In November 2020, he won the Eureka Prize for Excellence in Data Science, awarded by the Australian Museum.

In 2019, Tao and his team trained a neural network to construct 3D environments using a motion-blurred image, such as would be captured by a moving car. Details, including the motion, blurring effect and depth at which it was taken, helped the researchers to recover what they describe as “the 3D world hidden under the blurs”. The findings could help self-driving cars to better process their surroundings. — by Gemma Conroy

Nature 588 , S114-S117 (2020)

doi: https://doi.org/10.1038/d41586-020-03411-0

This article is part of Nature Index 2020 Artificial intelligence , an editorially independent supplement. Advertisers have no influence over the content.




Caltech

Artificial Intelligence

Since the 1950s, scientists and engineers have designed computers to "think" by making decisions and finding patterns like humans do. In recent years, artificial intelligence has become increasingly powerful, propelling discovery across scientific fields and enabling researchers to delve into problems previously too complex to solve. Outside of science, artificial intelligence is built into devices all around us, and billions of people across the globe rely on it every day. Stories of artificial intelligence—from friendly humanoid robots to SkyNet—have been incorporated into some of the most iconic movies and books.

But where is the line between what AI can do and what is make-believe? How is that line blurring, and what is the future of artificial intelligence? At Caltech, scientists and scholars are working at the leading edge of AI research, expanding the boundaries of its capabilities and exploring its impacts on society. Discover what defines artificial intelligence, how it is developed and deployed, and what the field holds for the future.



What Is AI?

Artificial intelligence is transforming scientific research as well as everyday life, from communications to transportation to health care and more. Explore what defines AI, how it has evolved since the Turing Test, and the future of artificial intelligence.


What Is the Difference Between "Artificial Intelligence" and "Machine Learning"?

The term "artificial intelligence" is older and broader than "machine learning." Learn how the terms relate to each other and to the concepts of "neural networks" and "deep learning."


How Do Computers Learn?

Machine learning applications power many features of modern life, including search engines, social media, and self-driving cars. Discover how computers learn to make decisions and predictions in this illustration of two key machine learning models.


How Is AI Applied in Everyday Life?

While scientists and engineers explore AI's potential to advance discovery and technology, smart technologies also directly influence our daily lives. Explore the sometimes surprising examples of AI applications.


What Is Big Data?

The increase in available data has fueled the rise of artificial intelligence. Find out what characterizes big data, where big data comes from, and how it is used.


Will Machines Become More Intelligent Than Humans?

Whether or not artificial intelligence will be able to outperform human intelligence—and how soon that could happen—is a common question fueled by depictions of AI in movies and other forms of popular culture. Learn the definition of "singularity" and see a timeline of advances in AI over the past 75 years.


How Does AI Drive Autonomous Systems?

Learn the difference between automation and autonomy, and hear from Caltech faculty who are pushing the limits of AI to create autonomous technology, from self-driving cars to ambulance drones to prosthetic devices.


Can We Trust AI?

As AI is further incorporated into everyday life, more scholars, industries, and ordinary users are examining its effects on society. The Caltech Science Exchange spoke with AI researchers at Caltech about what it might take to trust current and future technologies.


What Is Generative AI?

Generative AI applications became widely popular beginning in 2022, when companies released versions that members of the public, not just experts, could easily use. Examples include ChatGPT, a chatbot that answers questions with detailed written responses, and DALL-E, which creates realistic images and art based on text prompts.


Ask a Caltech Expert

Where can you find machine learning in finance? Could AI help nature conservation efforts? How is AI transforming astronomy, biology, and other fields? What does an autonomous underwater vehicle have to do with sustainability? Find answers from Caltech researchers.

Terms to Know

Algorithm

A set of instructions or sequence of steps that tells a computer how to perform a task or calculation. In some AI applications, algorithms tell computers how to adapt and refine processes in response to data, without a human supplying new instructions.
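The second half of that definition, refining a process in response to data, is the crux of machine learning. A minimal sketch of the idea: the same fixed update rule, repeated, lets invented data determine a parameter (here, fitting a slope by simple gradient descent).

```python
# The same fixed instructions, repeated: the data refines the
# parameter. Fit y ≈ a * x to invented points by gradient descent.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

a = 0.0                      # initial guess for the slope
for _ in range(200):
    grad = sum(2 * x * (a * x - y) for x, y in data) / len(data)
    a -= 0.05 * grad         # adjust the slope to reduce the error
print(round(a, 2))           # settles near the least-squares slope
```

No instruction in the loop mentions the value 2; the repeated update extracts it from the data.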

Artificial Intelligence

Artificial intelligence describes an application or machine that mimics human intelligence.

Automation

A system in which machines execute repeated tasks based on a fixed set of human-supplied instructions.

Autonomy

A system in which a machine makes independent, real-time decisions based on human-supplied rules and goals.

Big Data

The massive amounts of data that are coming in quickly and from a variety of sources, such as internet-connected devices, sensors, and social platforms. In some cases, using or learning from big data requires AI methods. Big data also can enhance the ability to create new AI applications.

Chatbot

An AI system that mimics human conversation. While some simple chatbots rely on pre-programmed text, more sophisticated systems, trained on large data sets, are able to convincingly replicate human interaction.

Deep Learning

A subset of machine learning. Deep learning uses machine learning algorithms but structures the algorithms in layers to create "artificial neural networks." These networks are modeled after the human brain and are most likely to provide the experience of interacting with a real human.
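The "layers" in that definition can be made concrete in a few lines. This sketch is illustrative only (random weights, no training): it stacks two dense layers so that the second computes on features produced by the first.

```python
# Minimal illustration of structuring computation in layers: a tiny
# two-layer neural network doing a forward pass (no training shown).
import math
import random

def dense_layer(inputs, weights, biases):
    """One layer: weighted sums of the inputs, then a nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

random.seed(0)
x = [0.5, -0.2, 0.1]  # input features
w1 = [[random.uniform(-1, 1) for _ in x] for _ in range(4)]
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

hidden = dense_layer(x, w1, [0.0] * 4)       # layer 1 extracts features
output = dense_layer(hidden, w2, [0.0] * 2)  # layer 2 builds on them
print(len(hidden), len(output))              # 4 2
```

Training would adjust the weights from data; deep-learning frameworks automate exactly this layering and adjustment at far larger scale.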

Human in the Loop

An approach that includes human feedback and oversight in machine learning systems. Including humans in the loop may improve accuracy and guard against bias and unintended outcomes of AI.

Model (computer model)

A computer-generated simplification of something that exists in the real world, such as climate change, disease spread, or earthquakes. Machine learning systems develop models by analyzing patterns in large data sets. Models can be used to simulate natural processes and make predictions.

Neural Networks

Interconnected sets of processing units, or nodes, modeled on the human brain, that are used in deep learning to identify patterns in data and, on the basis of those patterns, make predictions in response to new data. Neural networks are used in facial recognition systems, digital marketing, and other applications.

Singularity

A hypothetical scenario in which an AI system develops agency and grows beyond human ability to control it.

Training data

The data used to "teach" a machine learning system to recognize patterns and features. Typically, continual training results in more accurate machine learning systems. Conversely, biased or incomplete datasets can lead to imprecise or unintended outcomes.

Turing Test

An interview-based method proposed by computer pioneer Alan Turing to assess whether a machine can think.


How to Become an AI Researcher

Learn what it takes to become an AI Researcher in 2024, and how to start your journey.

  • What is an AI Researcher
  • How to Become
  • Certifications
  • Tools & Software
  • LinkedIn Guide
  • Interview Questions
  • Work-Life Balance
  • Professional Goals
  • Resume Examples
  • Cover Letter Examples

Land an AI Researcher role with Teal

How do I become an AI Researcher?

  • Gain a Strong Educational Foundation
  • Develop Technical Proficiency
  • Engage in Research and Practical Application
  • Build a Professional Network
  • Contribute to Research and Publish Findings
  • Stay Current with AI Advancements
  • Typical Requirements to Become an AI Researcher
  • Educational Requirements and Academic Pathways
  • Building Experience in AI Research
  • Key Skills for Aspiring AI Researchers
  • Additional Qualifications for a Competitive Edge
  • Alternative Ways to Start an AI Researcher Career
  • Transitioning from Adjacent Technical Roles
  • Industry Experience and Domain Expertise
  • Self-Guided Learning and Online Education
  • Interdisciplinary Backgrounds
  • How to Break into the Industry as an AI Researcher: Next Steps
  • FAQs About Becoming an AI Researcher
  • How Long Does It Take to Become an AI Researcher?
  • Do You Need a Degree to Become an AI Researcher?
  • Can I Become an AI Researcher with No Experience?


Tackling the most challenging problems in computer science

Our teams aspire to make discoveries that positively impact society. Core to our approach is sharing our research and tools to fuel progress in the field, to help more people more quickly. We regularly publish in academic journals, release projects as open source, and apply research to Google products to benefit users at scale.

Featured research developments

Mitigating aviation’s climate impact with Project Contrails

Consensus and subjectivity of skin tone annotation for ML fairness

A toolkit for transparency in AI dataset documentation

Building better pangenomes to improve the equity of genomics

A set of methods, best practices, and examples for designing with AI

Learn more from our research

Researchers across Google are innovating across many domains. We challenge conventions and reimagine technology so that everyone can benefit.

Publications

Google publishes over 1,000 papers annually. Publishing our work enables us to collaborate and share ideas with, as well as learn from, the broader scientific community.

Research areas

From conducting fundamental research to influencing product development, our research teams have the opportunity to impact technology used by billions of people every day.

Tools and datasets

We make tools and datasets available to the broader research community with the goal of building a more collaborative ecosystem.

Meet the people behind our innovations

Our teams collaborate with the research and academic communities across the world

Partnerships to improve our AI products


Research and Teach AI

American researchers and educators are foundational to ensuring our nation’s leadership in AI. The Biden-Harris Administration is investing in helping U.S. researchers and entrepreneurs build the next generation of safe, secure, and trustworthy AI, as well as supporting educators and institutions developing the future AI workforce.

National AI Research Resource Pilot

The National AI Research Resource (NAIRR) pilot, launched by the U.S. National Science Foundation (NSF) in January 2024, aims to expand access to critical AI research resources by connecting U.S. researchers and students to compute, data, software, model, and training resources they need to engage in AI research.

National AI Research Institutes

Call for Proposals

Submit proposals to expand the NAIRR Pilot community to new and emerging researchers and to educators bringing inclusive AI educational experiences to classrooms nationwide.

Advice for Renewal of Existing AI Institute Awards

Resources for Researchers and Entrepreneurs

Supercharging America’s AI Workforce

An AI-ready workforce is essential for the United States to fully realize AI’s potential to advance scientific discovery, economic prosperity, and national security. By 2025, a pilot program led by the U.S. Department of Energy, in coordination with the U.S. National Science Foundation, will have leveraged a suite of existing training programs to augment the national AI workforce at national laboratories, institutions of higher education, and other pathways. The pilot program will train more than 500 new researchers at all academic levels and career stages in a variety of critical basic research and enabling technology development areas.

AI Test Beds

Submit proposals for new approaches to develop and evaluate AI systems in real-world settings. NSF is encouraging the community to submit proposals for planning grants to expand existing test beds and infrastructure to evaluate AI systems.

Entrepreneurial Fellowships Program

Through the National Science Foundation’s partnership with Activate, this program supports budding entrepreneurs for two years, providing them mentorship, stipends, and access to vital research tools, equipment, facilities, and expertise through collaboration with host laboratories.

National Defense Science and Engineering Graduate (NDSEG) Fellowship

The Department of Defense (DoD)’s NDSEG program offers graduate fellowships in 19 research disciplines, including AI, of strategic interest to the DoD. The program provides 3-year fellowships for students at or near the beginning of graduate study.

Privacy-Preserving Data Sharing in Practice (PDaSP)

This funding opportunity aims to enable and promote data sharing in a privacy-preserving and responsible manner to harness the power and insights of data for public good, such as for training powerful AI models. Led by the U.S. National Science Foundation in partnership with the U.S. Department of Transportation, the National Institute of Standards and Technology, and two technology companies, PDaSP will specifically prioritize use-inspired and translational research that empowers federal agencies and the private sector to adopt leading-edge privacy enhancing technologies in their work.

Responsible Design, Development, and Deployment of Technologies (ReDDDoT)

Submit proposals for research, implementation, and education projects involving multi-sector teams that focus on the responsible design, development, or deployment of technologies, including AI.

Resources for Educators and Institutions

Artificial Intelligence and the Future of Teaching and Learning

The Department of Education (ED) has released a report to guide educators in understanding what AI can do to advance educational goals, while evaluating and limiting key risks.

Computer Science for All (CSforAll)

NSF’s CSforAll program supports partnerships and research that equip high school teachers to teach computer science, K-8 teachers to incorporate computer science and computational thinking in their classes, and school districts to create computing pathways across all grades.

NSF’s EducateAI initiative invites schools, school districts, community colleges, universities, and partner institutions to submit proposals that support educators advancing inclusive computing education, integrate AI-focused curricula into high school and undergraduate classrooms, and create engaging, comprehensive educational materials that align with the latest advancements in AI.


The Road to Becoming an AI Researcher: Essential Steps & Qualifications

  • December 5th, 2023

The fascinating realm of artificial intelligence is constantly evolving, offering a playground for innovation to those eager to contribute. At the heart of this technological revolution is the AI Researcher, a role that merges the rigor of science with the thrill of cutting-edge developments.

In this journey, AI Researchers are the pathfinders, navigating through uncharted territories of data and algorithms to shape the future. Their mission is not just about building smarter machines, but also about understanding the implications of AI in our society. Whether it’s through improving healthcare, revolutionizing transportation, or enhancing the way we interact with technology, their work is reshaping our world. It’s no wonder that a career in AI research is highly coveted. For those of you considering this path, embarking on The Road to Becoming an AI Researcher promises a rewarding journey filled with challenges and triumphs. Moreover, diving into the depth of AI Job Roles and Career Paths can help you find your niche in this dynamic field.

Navigating the Academic Terrain: Educational Requirements for AI Researchers

Stepping into the academic arena of AI research requires not just curiosity, but a solid educational foundation. To don the hat of an AI Researcher, one typically needs to journey through rigorous higher education. Essential for laying the groundwork, degrees in computer science, cognitive science, mathematics, or related fields form the bedrock of this career. The quest often begins with a bachelor’s degree, but swiftly moves to more advanced studies—most AI Researchers hold master’s degrees or doctorates.

Aspiring researchers should focus on areas such as machine learning, neural networks, and computational statistics. Becoming well-versed in these disciplines is crucial and often augmented by certifications in specific AI technologies or methodologies. The pantheon of requisite qualifications can range from a Master of Science in AI to specialized certificates in deep learning or reinforcement learning. For those with a strong passion for data and its intersection with AI, observing the interplay of roles and skills found in AI Data Scientist Roles can be insightful. Likewise, for those leaning more towards the engineering aspect, delineating a robust Career Path in AI Engineering can lay out a clear trajectory towards becoming an influential AI Researcher.

Acquiring the Tools of the Trade: Key Skills for AI Research Mastery

Technical Skills

Mastering the core technical skills is a pre-requisite for any AI Researcher looking to make a significant impact. This includes a robust understanding of algorithms, probability and statistics, and data modeling. AI Researchers must be proficient in machine learning techniques, including but not limited to supervised and unsupervised learning, neural networks, and deep learning frameworks.
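To make a term like "supervised learning" concrete, here is a minimal sketch in plain Python of the simplest supervised method of all, a 1-nearest-neighbor classifier; the feature vectors and labels are invented purely for illustration:

```python
# Minimal supervised learning: a 1-nearest-neighbor classifier.
# The training data and labels below are toy values for illustration.

def euclidean(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train_X, train_y, query):
    """Label the query point with the label of its nearest training point."""
    nearest = min(range(len(train_X)), key=lambda i: euclidean(train_X[i], query))
    return train_y[nearest]

train_X = [(1.0, 1.0), (1.2, 0.8), (8.0, 9.0), (9.0, 8.5)]
train_y = ["small", "small", "large", "large"]

print(predict(train_X, train_y, (1.1, 0.9)))  # → small
print(predict(train_X, train_y, (8.5, 9.2)))  # → large
```

Research code would use vectorized libraries and proper train/test splits, but the core idea, predicting from labeled examples, is the same.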

Programming Skills

On top of theoretical knowledge, practical programming skills are indispensable. Proficiency in languages such as Python, R, and Java is often essential, as these are the mainstays in developing AI models. A deep understanding of libraries and frameworks like TensorFlow, Keras, or PyTorch accelerates the building and testing of complex systems. It’s also beneficial to be conversant with database management and query languages. For an in-depth look at a career where these skills are applied, consider exploring the Machine Learning Engineer Careers .
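Frameworks like TensorFlow and PyTorch automate gradient computation, but it helps to see the loop they run. As a hand-written sketch (toy data, invented for illustration), here is gradient descent fitting a one-parameter linear model:

```python
# What frameworks like PyTorch automate, shown by hand: fit y = w * x
# to toy data by gradient descent on mean squared error.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # generated by the "true" model y = 2x

w = 0.0          # initial parameter guess
lr = 0.01        # learning rate

for _ in range(500):
    # d/dw of mean((w*x - y)^2) is mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill on the loss surface

print(round(w, 3))  # → 2.0
```

A framework replaces the hand-derived `grad` line with automatic differentiation and scales the same update to millions of parameters.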

Soft Skills

While the ability to crunch numbers and write code is critical, an AI Researcher’s soft skills are equally important. Clear communication allows for the effective conveyance of complex ideas to non-experts. Creativity is a must for pioneering novel solutions, while critical thinking aids in troubleshooting unforeseen obstacles.

Collaboration is also key, as AI projects are often interdisciplinary endeavors. Leadership and project management can help advance research objectives smoothly. These are the kinds of skills that are highly valued in roles like the AI Solutions Architect , a profession where you orchestrate the big-picture elements and finer details of AI projects to fruition.

Experience That Counts: Practical Steps to Gain AI Research Expertise

Theory and education are the bedrock of any AI Researcher’s toolbox, but it is the hands-on experience that sharpens and refines these tools, preparing you for the rigors of practical AI research. Internships, project work, and real-world industry exposure stand out as the anvil on which this expertise is forged. These experiences are not just resume embellishments—they’re essential testing grounds where knowledge meets application.

Internships offer a first glimpse into the inner workings of AI projects, serving as a vital stepping stone between academia and industry. They provide mentorship, networking opportunities, and a chance to solve real-world problems. Industry exposure, on the other hand, helps you understand the practical aspects and business implications of AI algorithms and systems. Projects, whether personal, academic, or collaborative, allow for deep dives into specific areas of AI, fostering innovation and specialization. By contributing to such projects, you may find yourself ready to take on roles such as an AI Project Manager , a career that requires a thorough grounding in managing AI initiatives from conception to delivery.

Type of Experience | Benefits
Internships | Mentorship, real-world problem solving, industry insight
Academic Projects | Research skills, specialization, academic collaboration
Industry Exposure | Business acumen, applied AI experience, networking
Collaborative Projects | Teamwork, cross-disciplinary skills, innovation
Personal Projects | Creativity, self-direction, problem-solving

Delving deeper into practical involvement, don’t overlook the role of data analysis in AI—it is foundational to the field. Engaging with datasets, gleaning insights, and interpreting results is a day-to-day reality for AI Researchers. For those inclined towards data’s narrative power, considering the Role of AI Analysts can provide perspective on how to weave stories from numbers and impact decision-making processes in various industries.

Ultimately, diversifying your portfolio of experiences is key to gaining AI research expertise. Through internships, industry roles, and projects, you not only hone existing skills but also acquire new ones—each experience building upon the last, propelling you toward mastery in AI research.

Industry Navigation: Finding Your AI Research Niche

Embarking on a career in AI research is akin to setting sail in vast oceanic waters—there’s a whole wide world of specialization out there, and finding your niche could be the key to a fulfilling journey in AI. The question is not just “Where to apply my skills?” but also “Which of these many realms resonates with my passions?” Let’s steer through some prominent AI research fields where opportunity and innovation intersect.

  • Natural Language Processing (NLP) : The study of human language by machines, focusing on developing algorithms that understand, interpret, and generate human languages.
  • Computer Vision : The extraction of meaningful information from visual inputs such as images or videos, vital for developments in autonomous vehicles and facial recognition.
  • Healthcare AI : Augmenting medical diagnostics, patient care, and biomedical research with AI, which has become increasingly pivotal in personalized medicine and drug discovery.
  • AI in Robotics : Blending AI with robotics to create intelligent machines that can perform complex tasks autonomously. For those intrigued by the synergy of AI and physical machines, a dive into Robotics Engineer Careers in AI would be ideal.
  • Ethical AI : Developing frameworks and standards to ensure AI systems are designed and deployed responsibly.
  • AI in Finance : Harnessing AI to revolutionize trading, fraud detection, and customer service in the financial sector.
  • Retail and E-Commerce AI : Personalizing shopping experiences and optimizing supply chains with AI analytics.
  • AI Product Management : Orchestrating the development and delivery of AI-driven products. This particular niche blends strategic vision with technical expertise, and for those looking to lead at the convergence of AI and product development, exploring a Career in AI Product Management would be an enlightening next step.
  • AI for Sustainability : Leveraging AI to tackle environmental issues, from climate change prediction to conservation efforts.

Choosing a specialization in AI research often boils down to where your intellectual curiosity shines the brightest and where your expertise can have the greatest impact. Consider where your strengths can meet the demands of the market and remember, the niche you choose today is just the starting point of an ever-evolving path in AI Research.

Building a Robust Portfolio: Showcasing Your AI Research Capabilities

For AI Researchers, a professional portfolio is not just a collection of past work—it’s a testament to your skills, a blueprint of your intellectual journey, and a compelling narrative of your accomplishments. It’s essential to meticulously curate projects and materials that highlight both the breadth and depth of your capabilities in AI research.

Begin with your most impactful projects, weaving in detailed explanations of the problems, your approach, the methodologies you employed, and the outcomes. Don’t shy away from including code snippets, algorithms, and models you’ve built, as these pieces exhibit the technical finesse at the core of your work. Always include metrics and analyses that demonstrate the tangible impact of your research, as data speaks volumes to prospective employers or academic programs.

Papers, publications, and presentations should also find a place in your portfolio. They underscore your ability to communicate complex ideas and contribute to the larger academic and professional community. Consider adding collaborative elements to showcase your teamwork and interdisciplinary engagement, essential aspects in fields like AI in Software Development Roles where various expertise converges to create innovative solutions.

When thinking about the layout and design of your portfolio, simplicity and accessibility are key. Make it as easy as possible for reviewers to navigate through your work, understand it, and appreciate the implications of your research. Additionally, testimonials and endorsements can bolster your portfolio by providing a credible outside perspective on your work and work ethic.

In an ever-expanding AI market, it is also wise to demonstrate your versatility and adaptability to various domains within AI. By including a diverse array of projects—ranging from theoretical to applied AI—you position yourself as an adaptable asset capable of thriving in multiple areas including consulting. To get a sense of how varied AI applications can be, exploring AI Consultant career opportunities may unveil the vast array of sectors eager for AI expertise.

Remember, your portfolio is the narrative of your AI Research journey; make it a compelling one that invites readers into the story of your professional growth and aspirations. With a strong portfolio, you’re not just another applicant; you’re a proven innovator ready to make significant contributions to the field of AI.

AI Research in the Job Market: Career Opportunities and Pathways

Stepping into the world of AI research opens up a galaxy of burgeoning career opportunities. The current job market is ripe with demand for professionals skilled in artificial intelligence, as industries across the spectrum seek to harness the power of AI to innovate, streamline, and grow.

Job Titles in AI Research

  • AI Research Scientist : Pushes the boundaries of what machines are capable of by developing new algorithms and technologies.
  • Deep Learning Engineer : Specializes in using deep neural networks to solve complex problems and design intelligent systems.
  • Applied AI Researcher : Focuses on translating AI research into practical applications for business and consumer use.
  • Quantum Machine Learning Researcher : Explores the intersection of quantum computing and machine learning to create groundbreaking AI solutions.
  • AI Ethicist : Addresses the moral implications of AI and ensures that AI systems are designed and implemented ethically.
  • AI Technical Writer : Essential for distilling complex AI concepts into accessible and informative content, a role perfectly detailed in the AI Technical Writer Career Path .

Career Progression for AI Researchers

Career advancement for AI Researchers typically starts with entry-level positions, often following rigorous academic preparation or internships. As AI Researchers accumulate experience and demonstrate their abilities, they can move on to more senior roles, taking on greater responsibilities such as leading research teams or managing large-scale projects. An experienced AI Researcher could ascend to the role of Chief AI Scientist or even branch off into strategic roles like AI Product Manager or AI Start-up Founder.

The journey doesn’t end there, as continuous learning and specialization can open doors to prestigious positions with leading tech firms, research institutions, or innovative start-ups. Securing a prominent role often involves pushing the frontier of AI, contributing to influential research papers, or developing patents. For a broad view of potential AI career trajectories, the Overview of AI Job Roles serves as an insightful resource.

Ultimately, forging a successful career in AI research means staying attuned to industry needs, focusing on in-demand specialties, and staying flexible in an ever-changing technological landscape. Whether you’re communicating complex subjects as an AI Technical Writer or leading cutting-edge research projects, AI offers a dynamic career with the potential for lifelong learning and impact.

Continuous Learning: Staying Current as an AI Researcher

As an AI Researcher, the learning never stops. The field is in a state of flux, with new discoveries and technologies surfacing at breakneck speed. Engaging in lifelong learning is not just recommended; it’s essential to remain at the forefront of AI research. This means regularly consuming the latest research papers, attending conferences, and participating in workshops to keep your skills sharp and your knowledge up to date.

  • Academic Journals: Subscribe to leading journals such as ‘Journal of Artificial Intelligence Research’ or ‘AI Magazine’ for new insights and discoveries.
  • Online Courses: Platforms like Coursera, edX, and Udacity offer advanced courses on AI topics to refine your expertise.
  • Conferences: Attend international conferences like NeurIPS, ICML, or CVPR to network and learn from pioneers in the field.
  • Webinars and Workshops: Participate in online webinars and workshops that provide hands-on experience with new tools and techniques.
  • AI Meetups: Join local AI groups or online communities to exchange knowledge and collaborate on projects.
  • Specialized Training: Explore certification programs to gain expertise in a sub-domain of AI, such as reinforcement learning or generative models.

In addition to technical prowess, staying abreast of ethical considerations in AI is paramount. With AI’s growing role in decision-making, understanding the societal and ethical implications is crucial. This is why some professionals consider Becoming an AI Ethics Officer , a path that champions responsible AI development and usage.

Furthermore, AI Researchers often dovetail their technical know-how with business acumen. For those who wish to bridge the gap between technical research and business strategy, pursuing a path highlighted in AI Business Analyst: A Career Overview can be immensely satisfying and career-advancing.

Remember, as an AI Researcher, continuous learning is your most valuable asset and your most significant guarantee for staying relevant and innovative in an exhilarating and transformative field.

Reaching the Pinnacle of AI Research

The summit of AI research is not conquered easily; it demands persistent dedication, insatiable curiosity, and the resilience to navigate the relentless waves of technological progress. Yet, for those who relish the challenge, the rewards—both intellectual and professional—are immeasurable. The journey to becoming an elite AI Researcher is paved with rigorous education, hands-on experience, continuous learning, and the unyielding pursuit of innovation.

Your odyssey begins with a solid academic foundation, progresses through mastering the requisite skills and gaining invaluable experience, and further unfolds as you find your niche—each step an integral part of the overarching quest. Along the way, don’t underestimate the significance of roles that ensure AI reliability and safety, such as AI Quality Assurance Engineer Roles , or the creative aspects found in UI/UX Design in AI , both of which contribute to the AI ecosystem in profound ways. As you climb higher, remember the importance of building a robust portfolio and engaging in lifelong learning to remain at the spearhead of AI research. The path is long, and the climb is steep, but the panorama from the pinnacle of AI research is nothing short of spectacular. Forge ahead with courage and conviction—you are the architects of tomorrow’s AI horizon.



What does an AI research scientist do?


What is an AI Research Scientist?

An AI research scientist specializes in conducting research and development in the field of artificial intelligence (AI). These scientists work on advancing the understanding and capabilities of AI systems through theoretical exploration, experimentation, and innovation. They may work in academic institutions, research labs, or industry settings, collaborating with multidisciplinary teams to explore new algorithms, techniques, and methodologies that push the boundaries of AI.

AI research scientists may specialize in various subfields of AI, such as machine learning, natural language processing, computer vision, or robotics, depending on their interests and expertise. They help to translate theoretical advancements into practical applications, working with engineers and developers to integrate AI technologies into real-world systems and solutions.

What does an AI Research Scientist do?


Duties and Responsibilities

AI research scientists have a range of duties and responsibilities that contribute to the advancement of artificial intelligence technologies. Here are some key responsibilities:

  • Research and Development: Conduct research to advance the state-of-the-art in AI, exploring new algorithms, techniques, and methodologies. This may involve designing experiments, collecting and analyzing data, and developing prototypes to test new ideas and theories.
  • Algorithm Development: Design and develop algorithms and models for solving complex AI problems, such as machine learning, natural language processing, computer vision, or robotics. This includes exploring novel approaches, refining existing techniques, and optimizing algorithms for performance and scalability.
  • Experimentation and Evaluation: Design and conduct experiments to evaluate the performance and effectiveness of AI algorithms and models. This may involve benchmarking against existing methods, conducting comparative studies, and analyzing results to identify strengths, weaknesses, and areas for improvement.
  • Publication and Collaboration: Publish research findings in academic journals and conferences to contribute to the broader scientific community's understanding of AI. Collaborate with colleagues, academic partners, and industry collaborators to exchange ideas, share knowledge, and advance research agendas.
  • Prototype Development: Develop prototypes and proof-of-concept implementations to demonstrate the feasibility and potential of new AI technologies. This may involve coding, testing, and iterating on software implementations to showcase the capabilities of AI algorithms in real-world scenarios.
  • Technical Leadership: Provide technical leadership and expertise within multidisciplinary teams, guiding and mentoring junior researchers and engineers. Collaborate with cross-functional teams to integrate AI technologies into products, systems, and solutions.
  • Continuous Learning and Innovation: Stay abreast of the latest developments and trends in AI research, attending conferences, workshops, and seminars, and participating in online communities and forums. Continuously explore new ideas, approaches, and technologies to drive innovation and push the boundaries of AI.
  • Ethical Considerations: Consider ethical implications and societal impacts of AI research and development, such as fairness, accountability, transparency, and privacy. Ensure that AI technologies are developed and deployed responsibly, in accordance with ethical guidelines and best practices.
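The "Experimentation and Evaluation" duty above often starts as simply as the following sketch: scoring a model's predictions on held-out labels against a majority-class baseline. All labels and predictions here are invented for illustration:

```python
# Benchmark a model's test-set accuracy against a trivial baseline.
# The labels and predictions below are invented for illustration.

from collections import Counter

def accuracy(predictions, truths):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == t for p, t in zip(predictions, truths)) / len(truths)

test_truth = ["cat", "dog", "cat", "cat", "dog", "cat"]
model_pred = ["cat", "dog", "cat", "dog", "dog", "cat"]

# Baseline: always predict the most common class in the test labels.
majority = Counter(test_truth).most_common(1)[0][0]
baseline_pred = [majority] * len(test_truth)

print(f"model:    {accuracy(model_pred, test_truth):.2f}")    # 0.83
print(f"baseline: {accuracy(baseline_pred, test_truth):.2f}") # 0.67
```

A model that cannot beat the majority-class baseline has learned nothing useful, which is why comparative evaluation like this is routine before any stronger claim is made.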

Types of AI Research Scientists

The following are just a few examples of the diverse roles within the field of AI research; researchers often specialize further within these domains or work at the intersection of multiple areas to address complex challenges in artificial intelligence.

  • Computer Vision Research Scientist: Specializes in developing algorithms and models for interpreting and understanding visual information from the world, enabling machines to analyze and make decisions based on images or video data.
  • Conversational AI Research Scientist: Focuses on natural language processing (NLP) and dialog systems, working to enhance the capabilities of conversational agents, chatbots, and virtual assistants.
  • Deep Learning Research Scientist: Concentrates on advancing deep learning techniques, architectures, and algorithms, with a focus on neural networks to enable machines to learn complex representations and solve intricate problems.
  • Human-Robot Interaction Research Scientist: Investigates methods to improve the interaction between humans and robots, addressing issues such as communication, collaboration, and understanding human behavior to enhance the effectiveness of robotic systems.
  • Machine Learning Research Scientist: Specializes in developing and refining machine learning algorithms, exploring techniques to enable machines to learn from data and make predictions or decisions without explicit programming.
  • Reinforcement Learning Research Scientist: Focuses on reinforcement learning, a subset of machine learning where agents learn to make decisions by interacting with an environment and receiving feedback in the form of rewards or penalties.
  • Robotics Research Scientist: Conducts research in the field of robotics, working on the development of robotic systems capable of perception, decision-making, and autonomous action in real-world environments.
  • Speech Recognition Research Scientist: Specializes in improving the accuracy and performance of speech recognition systems, enabling machines to transcribe spoken language into text.
  • Transfer Learning Research Scientist: Investigates techniques and methodologies for transfer learning, where knowledge gained from one task or domain is applied to improve performance on a different but related task or domain.
  • Unsupervised Learning Research Scientist: Focuses on unsupervised learning approaches, where algorithms are designed to extract patterns and structure from data without explicit labels, enabling machines to discover meaningful representations.
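As a taste of the reinforcement-learning subfield described above, here is a sketch of tabular Q-learning on a toy five-state corridor, where reaching the rightmost state earns a reward of 1. The environment and hyperparameters are invented for illustration:

```python
# Tabular Q-learning on a tiny corridor: the agent learns, from reward
# alone, that stepping right is the best action in every state.

import random

N_STATES = 5
ACTIONS = [-1, +1]            # step left or right along the corridor
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def choose(s):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < epsilon or Q[(s, -1)] == Q[(s, 1)]:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

random.seed(0)
for _ in range(2000):                       # episodes
    s = 0
    while s < N_STATES - 1:                 # run until the terminal state
        a = choose(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        # Q-learning update: nudge toward reward + discounted future value.
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy steps right (+1) in every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

Research-grade reinforcement learning replaces the table with a neural network and the corridor with a simulator, but the reward-driven update rule at the core is the same.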

What is the workplace of an AI Research Scientist like?

The workplace of an AI research scientist can vary depending on factors such as the employer, industry, and specific role within the field. Many AI research scientists work in academic institutions, research labs, or government agencies, where they have access to state-of-the-art facilities and resources for conducting cutting-edge research. These environments often foster collaboration and innovation, with opportunities to work alongside other researchers, graduate students, and industry partners on interdisciplinary projects.

In addition to academic and research institutions, many AI research scientists also work in industry, particularly in technology companies and startups focused on AI and machine learning. These organizations may offer dynamic and fast-paced work environments, with opportunities to work on real-world problems and applications of AI technology. Tech companies often provide access to large-scale datasets, computing infrastructure, and specialized tools and platforms for AI research and development.

Remote work has become increasingly common in the field of AI research, particularly in light of recent global events. Many organizations offer flexible work arrangements that allow AI research scientists to work from home or other remote locations, leveraging digital communication tools and collaboration platforms to stay connected with colleagues and collaborators. Remote work offers flexibility and autonomy, allowing researchers to manage their schedules and work environments while still making significant contributions to AI research.

AI Research Scientists are also known as AI Scientists or Artificial Intelligence Research Scientists.

The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI


Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.

Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years. The first report is fairly rosy. For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges. The second has a much more mixed view. I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There have also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.
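The feedback loop described here, where a user's actions shape what they are shown next, can be made concrete with a toy sketch. The scoring scheme below is invented for illustration and is not any platform's actual algorithm:

```python
# Toy illustration of how user actions feed back into future
# recommendations: each click nudges up the score of the clicked
# category, so similar items rank higher the next time around.

def recommend(scores, k=2):
    """Return the k highest-scoring categories."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def record_click(scores, category, lr=0.5):
    """Increase a category's score after the user clicks on it."""
    scores[category] = scores.get(category, 0.0) + lr

scores = {"cooking": 1.0, "gaming": 1.0, "news": 1.0}
record_click(scores, "gaming")
record_click(scores, "gaming")
print(recommend(scores))  # gaming now outranks the others
```

Even this trivial version shows the property children need to grasp: the system's future behaviour is a function of their own past clicks.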

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc. I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education. Of course, this is in addition to standard good engineering practices like building robust models, validating them, and so forth, all of which is a bit harder with AI.

Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.
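The triage pattern described here, an AI first pass with a clinician kept in the loop, can be sketched as a simple confidence-thresholded router. The probabilities and thresholds below are invented for illustration; a real system would be validated clinically:

```python
# Hypothetical sketch of AI-assisted triage: the model's read is only a
# first pass, and low-confidence cases are deferred to a human expert.

def triage(prob_suspicious, high=0.9, low=0.1):
    """Route a case based on model confidence that a lesion is suspicious."""
    if prob_suspicious >= high:
        return "flag: prioritize for urgent specialist review"
    if prob_suspicious <= low:
        return "routine: queue normally (specialist still signs off)"
    return "defer: a human reads this case first"

for p in (0.95, 0.5, 0.02):
    print(p, "->", triage(p))
```

The design choice worth noting is the middle branch: rather than forcing a yes/no answer, uncertain cases are explicitly handed to the human, which is where the complementary strengths of people and AI show up.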

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.



Stanford AI Lab

The Stanford Artificial Intelligence Laboratory (SAIL) has been a center of excellence for Artificial Intelligence research, teaching, theory, and practice since its founding in 1963.

Latest News

  • Stanford AI Lab PhD student Dora Zhao won an ICML 2024 Best Paper Award for a paper from her work at Sony AI: Measure Dataset Diversity, Don’t Just Claim It.
  • Aaron Lou, Chenlin Meng, and Stefano Ermon won an ICML 2024 Best Paper Award for Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution.
  • Marco Pavone won the Best Paper Award at the Robotics: Science and Systems Conference for work on AI safety for autonomous systems.
  • Carlos Guestrin was elected to the National Academy of Engineering “for scalable systems and algorithms enabling the broad application of machine learning in science and industry.”
  • Chris Manning was awarded the 2024 IEEE John von Neumann Medal, one of IEEE’s top awards in computing, “for advances in computational representation and analysis of natural language.”
  • Sanmi Koyejo and his students won a NeurIPS Outstanding Paper Award; Chris Manning, Stefano Ermon, Chelsea Finn, and their students were Outstanding Paper Runners-Up at NeurIPS.
  • Prof. Fei-Fei Li was featured on CBS Mornings’ “The Age of AI”, discussing being named the “Godmother of AI”.


We Are Pleased to Welcome New Members of Our Faculty

Diyi Yang who focuses on Computational Social Science and Natural Language Processing

Sanmi Koyejo who focuses on Trustworthy Machine Learning for Healthcare and Neuroscience



Top 12 AI Leaders and Researchers You Should Know in 2024


Deep learning continues to produce advanced techniques with widespread applications faster than anyone can keep up with. Dozens of papers are uploaded to arXiv every day, and hundreds of scientists and engineers are active in the field. To stay in the loop, we’ve put together a list of 12 innovators and researchers you can follow to keep up with the progress the discipline is bringing to science, industry and society. The list includes links to the website, LinkedIn, Twitter account, Facebook profile and Google Scholar profile of each AI leader.

These are the top 12 AI leaders to watch:

  • Andrew Ng
  • Fei-Fei Li
  • Andrej Karpathy
  • Demis Hassabis
  • Ian Goodfellow
  • Yann LeCun
  • Jeremy Howard
  • Ruslan Salakhutdinov
  • Geoffrey Hinton
  • Alex Smola
  • Rana el Kaliouby
  • Daphne Koller

1. Andrew Ng: Founder and CEO of Landing AI, Founder of deeplearning.ai

Website: https://www.andrewng.org, Twitter: @AndrewYNg, Facebook: Andrew Ng, Google Scholar.

Andrew was a co-founder and head of Google Brain. He was also Chief Scientist at Baidu, where he led the company’s AI group. He is a pioneer in online education as a co-founder of deeplearning.ai and of Coursera, the world’s largest MOOC platform, which started off with more than 100,000 students enrolling in his popular course CS229A. Dr Ng has touched countless lives through his work as a computer scientist, which led to him being named one of Time magazine’s 100 most influential people in 2012.

Dr Ng’s research spans machine learning, deep learning, computer vision, machine perception and natural language processing. His papers, several of which won best-paper awards at academic conferences, made him hugely popular and influential among computer scientists and have had a major impact on AI, robotics and computer vision.

Some of his best-known work includes the Autonomous Helicopter Project at Stanford and the Stanford Artificial Intelligence Robot project, which produced an open-source robotics software platform that is widely used today. The Google Brain project, which he founded in 2011, trained artificial neural networks using deep learning: a distributed system with 16,000 CPU cores learnt to recognize cats from watching YouTube videos without ever being taught what a cat is. Technology from the project is still used in the speech recognition system of the Android operating system.

2. Fei-Fei Li: Sequoia Professor of Computer Science, Stanford University

Stanford Profile, Twitter: @drfeifei, Google Scholar.

Dr Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University. She is also Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence and a Co-Director of the Stanford Vision and Learning Lab. She was a Vice President at Google from January 2017 to September 2018, serving as Chief Scientist for AI/Machine Learning at Google Cloud.

Dr Li currently works in areas such as cognitively inspired AI, deep learning, machine learning, computer vision and AI in healthcare, focusing on ambient intelligence systems. She has published more than 200 scientific articles in the major journals and conferences, and has also worked on cognitive and computational neuroscience. ImageNet, which Dr Li created, is a massive dataset and benchmarking effort that has been responsible for expanding the frontiers of artificial intelligence and deep learning.

Along with her technical contributions to the field, she is a leading national voice advocating for diversity in AI and STEM. Dr Li is the chairperson and co-founder of AI4ALL, a non-profit focused on diversity and inclusion in AI education. She has received numerous awards and recognition for her work, including ELLE Magazine’s 2017 Women in Tech, Foreign Policy’s Global Thinkers of 2015 and the Carnegie Foundation’s prestigious “Great Immigrants: The Pride of America” in 2016.

3. Andrej Karpathy: Senior Director of Artificial Intelligence at Tesla

Website: https://karpathy.ai, Twitter: @karpathy, Google Scholar.

Andrej Karpathy leads the team working on the neural networks of the Autopilot in Tesla’s cars. He previously worked at OpenAI as a research scientist on deep learning in computer vision, reinforcement learning and generative modeling. Andrej did his PhD at Stanford with Fei-Fei Li, working on convolutional and recurrent neural network architectures and their applications in natural language processing, computer vision and the intersection of the two. He also interned at Google, working on large-scale feature learning over YouTube videos. Andrej was the primary instructor of Stanford’s class on Convolutional Neural Networks for Visual Recognition (CS231n), which he designed together with Fei-Fei. The course was a huge success, growing from 150 enrolled students in 2015 to 750 in 2017.

Andrej is active on social media, with 352.4K followers on Twitter. He is an enthusiastic blogger and has developed deep learning libraries in JavaScript. He also spends his spare time maintaining Arxiv Sanity Preserver, a web interface for sorting through the more than 100,000 machine learning papers that have accumulated on arXiv over the last six years.

4. Demis Hassabis: Co-founder and CEO of DeepMind

Website: https://deepmind.com/about#leadership, Twitter: @demishassabis, Google Scholar.

Demis Hassabis co-founded DeepMind, an artificial intelligence company inspired by neuroscience. DeepMind was bought by Google in 2014 in the company’s largest European acquisition to date. Demis is now Vice President of Engineering at Google DeepMind, where he leads all of Google’s general artificial intelligence efforts, including AlphaGo, the first program ever to beat a professional player at the game of Go. DeepMind has contributed significantly to advances in machine learning and produced several award-winning papers.

A former chess prodigy, Demis had an early start in the games industry, completing the simulation game Theme Park at the age of 17. He graduated from Cambridge University in computer science and founded Elixir Studios, a pioneering video games company that produced award-winning games. After a decade leading successful technology startups, he returned to academia, completing a PhD in cognitive neuroscience at University College London and postdoctoral work at MIT and Harvard.

Demis has also worked on autobiographical memory and amnesia, co-authoring a number of influential papers in the field. His work on the episodic memory system, which links memory and imagination, received widespread media coverage and was listed as one of the top 10 breakthroughs of the year by the journal Science.

5. Ian Goodfellow: Director of Machine Learning at Apple

Website: https://www.iangoodfellow.com/, Twitter: @goodfellow_ian, Google Scholar.

Ian is known in the field as a machine learning researcher. He currently works at Apple as director of machine learning, was formerly a research scientist at Google Brain and has made a number of contributions to deep learning.

Ian received his B.S. and M.S. in computer science from Stanford University, where he was supervised by Andrew Ng. He earned his PhD in April 2014 from the Université de Montréal under the supervision of Yoshua Bengio and Aaron Courville. After graduating, Ian joined Google as part of the Google Brain research team. He then left Google to join OpenAI, which was then still a new institute, and in 2017 he returned to Google Research.

Ian is the lead author of the textbook Deep Learning and is best known for inventing generative adversarial networks (GANs). As part of his work at Google, he developed a system that automatically transcribes addresses from photos taken by Street View cars for Google Maps. He has also demonstrated vulnerabilities in machine learning systems. MIT Technology Review named him one of its 35 Innovators Under 35 in 2017, and Foreign Policy listed him among its 100 Global Thinkers in 2019.
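The adversarial idea behind GANs pits a generator against a discriminator that tries to tell real samples from generated ones. A minimal numeric sketch of the two losses from that formulation, with toy probabilities and no actual networks or training loop:

```python
import math

# Sketch of the GAN objective: the discriminator D maximizes
# log D(x) + log(1 - D(G(z))), while the generator G minimizes
# log(1 - D(G(z))). Here D's outputs are just hand-picked numbers:
# D(input) is its estimated probability that the input is real.

def discriminator_loss(d_real, d_fake):
    # Negated objective, so lower is better for the discriminator.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The generator wants the discriminator to score its samples
    # as real (d_fake -> 1), which drives this quantity down.
    return math.log(1.0 - d_fake)

# A confident, correct discriminator has low loss...
print(discriminator_loss(d_real=0.9, d_fake=0.1))
# ...and the generator's loss falls as it fools the discriminator more.
print(generator_loss(0.1), ">", generator_loss(0.9))
```

Training alternates gradient steps on these two losses, which is the two-player game that gives GANs their name.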

6. Yann LeCun: Chief AI Scientist at Facebook

Website: https://research.fb.com/people/lecun-yann/, Twitter: @ylecun, Google Scholar.

Yann LeCun is a computer scientist known primarily for his work in machine learning, mobile robotics, computer vision and computational neuroscience. His most popular contributions are in optical character recognition and computer vision using convolutional neural networks. He is the founding father of convolutional nets and one of the primary creators of DjVu, an image compression technology. LeCun received the Turing Award in 2018, along with Yoshua Bengio and Geoffrey Hinton, for their contributions to deep learning.

Among LeCun’s best-known work are his machine learning methods. His convolutional neural networks, a biologically inspired image recognition method, were applied to optical character recognition and handwriting recognition. This work led to a bank check recognition system used by NCR and other companies, through which 10% of all checks in the United States passed in the late 1990s and early 2000s.
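The operation at the heart of those convolutional networks is a small filter slid across the image, recording a response at each position. A toy pure-Python version (deep-learning "convolution" is, strictly speaking, the cross-correlation computed here), with an invented edge-detection example:

```python
# Slide a small kernel over an image and record the dot product at each
# position: the core operation of a convolutional layer. In a real CNN
# the kernel values are learned rather than hand-written.

def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly where the image jumps 0 -> 1.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # -> [[0, 2, 0], [0, 2, 0]]
```

Because the same kernel is reused at every position, the network detects a feature (here, a vertical edge) wherever it appears, which is what made these models so effective for character and handwriting recognition.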

LeCun joined AT&T Labs in 1996 as the head of the image processing research department. He worked mostly on the DjVu image compression technology, which would eventually be used by many websites to distribute scanned documents, with the Internet Archive being the most notable site. 

In 2012 LeCun became the founding director of the NYU Center for Data Science, and in 2013 he joined Facebook as the first director of AI research.

7. Jeremy Howard: Founding Researcher at fast.ai

Website: https://www.fast.ai/about/#jeremy, Twitter: @jeremyphoward

Jeremy Howard is an entrepreneur, developer, business strategist and educator. He is best known as the founding researcher at fast.ai and a Distinguished Research Scientist at the University of San Francisco. He was previously the founder and CEO of Enlitic, the first company to apply deep learning to medicine; its success put it on MIT Tech Review’s list of the world’s 50 smartest companies two years in a row.

Jeremy’s career started as a management consultant at McKinsey & Company, where he remained for eight years before moving into entrepreneurship. In 1999 he co-founded FastMail, which was later sold to Opera Software. FastMail, highly successful in Australia, was among the first email services to integrate with users’ existing desktop clients. He later joined Kaggle, the online community of data scientists, as President and Chief Scientist. fast.ai, which he co-founded with Rachel Thomas, is a research institute focused on making deep learning more accessible to everyone.

Howard’s company Enlitic was a pioneer in applying machine learning to medical diagnosis, improving its accuracy and speed. The deep learning algorithms Enlitic uses can diagnose diseases and illnesses, and Howard believes they can match or even outperform humans at the task. Jeremy appears regularly on Australian news programs and has created numerous tutorials on data science and web development.

8. Ruslan Salakhutdinov: Associate Professor, Carnegie Mellon University

Website: http://www.cs.cmu.edu/~rsalakhu/, Twitter: @rsalakhu

Ruslan Salakhutdinov is a computer science professor in the Machine Learning Department at Carnegie Mellon University and previously served as Director of AI Research at Apple. He specializes in statistical machine learning, and his research interests, in which he has published widely, include deep learning, probabilistic graphical models and large-scale optimization.

Salakhutdinov earned his PhD in machine learning from the University of Toronto in 2009. He spent two years as a postdoctoral fellow at the Artificial Intelligence Lab at the Massachusetts Institute of Technology, after which he joined the University of Toronto’s Department of Computer Science and Department of Statistics as an assistant professor. He has received numerous honours, including the Connaught New Researcher Award, the Early Researcher Award, a Microsoft Research Faculty Fellowship, an Alfred P. Sloan Research Fellowship, a Google Faculty Research Award and Fellowship of the Canadian Institute for Advanced Research.

9. Geoffrey Hinton: Computer Science Professor at University of Toronto

Website: http://www.cs.toronto.edu/~hinton/, Twitter: @geoffreyhinton, Google Scholar.

Geoffrey Hinton is one of the most famous AI leaders in the world, with work spanning machine learning, neural networks, artificial intelligence, cognitive science and object recognition. Hinton is a cognitive psychologist and computer scientist best known for his work on artificial neural networks. A leading figure in the deep learning community, he has divided his time between Google Brain and the University of Toronto since 2013. AlexNet, an image recognition milestone designed in collaboration with his students, was a breakthrough in the field of computer vision. In 2018 Hinton received the Turing Award, along with Yann LeCun and Yoshua Bengio, for their work on deep learning; the trio are often referred to as the “Godfathers of Deep Learning” or the “Godfathers of AI”.

Hinton’s work explores the different ways neural networks can be used for machine learning, symbol processing, and memory and perception. He has authored or co-authored over 200 peer-reviewed publications. What distinguishes his work on artificial neural nets internationally is how they can learn by themselves without a human teacher, research that offers one of the first glimpses of brain-like structures that are truly autonomous and intelligent. Through this work, Hinton has found similarities between damaged nets and brain damage that leads to the loss of names and characterization. His work also examines mental imagery, devising puzzles that test creative intelligence and originality.

Hinton received his Honorary Doctorate from the University of Edinburgh in 2001, and he was also the recipient of the IJCAI Award for Research Excellence lifetime-achievement award in 2005. 

10. Alex Smola: Director, Amazon Web Services

Website: http://alex.smola.org, Twitter: @smolix, Google Scholar.

Alex Smola has been the director of machine learning at Amazon Web Services since 2016. His work focuses on machine learning, statistical data analysis, computer vision, deep learning and NLP, designing tools for data scientists. Alex has authored over 200 papers, edited five books and guided many PhD students and researchers. His primary interests are deep learning, the scalability of algorithms, statistical modelling and applications in document analysis, user modelling and more.

Alex received his master’s degree from the University of Technology, Munich, in 1996 and his doctorate in computer science from the University of Technology Berlin in 1998. He subsequently worked as a researcher in the IDA Group and at the Australian National University’s Research School of Information Sciences and Engineering. He has also worked as a researcher at Yahoo and Google and taught at Carnegie Mellon University. In 2015 Alex co-founded Marianas Labs, and in 2016 he moved to Amazon Web Services to build artificial intelligence and machine learning tools for the company.

Alex is always on the lookout for talented interns and team members who are skilled at writing code, working with deep learning and writing efficient algorithms, and who are familiar with high-performance computing systems.

11. Rana el Kaliouby: CEO and Co-Founder of Affectiva

Website: https://www.ted.com/speakers/rana_el_kaliouby, Twitter: @kaliouby, Google Scholar.

Rana el Kaliouby is a pioneer in artificial intelligence and the founder and CEO of Affectiva. Her company, a spinoff of the MIT Media Lab, aims to integrate emotional intelligence into users’ digital experiences everywhere. She heads the emotion analytics team that developed Affectiva’s emotion-sensing algorithms and mined the world’s largest emotion database, assembling 12 billion data points from videos of 2.9 million volunteers across 75 countries. The platform is used by many leading companies around the world to measure consumer engagement, and the team is pioneering emotion-based digital apps for entertainment, enterprise, video communication and online education.

Rana earned her PhD from Cambridge University, after which she joined the MIT Media Lab as a research scientist. She was instrumental in applying emotion recognition technology in a number of fields, including mental health, before leaving MIT to co-found Affectiva, a pioneer in the field of Emotion AI that now works with 25% of the Fortune 500.

Forbes named Rana among America’s Top 50 Women in Tech, and Fortune included her in its 40 Under 40. Rana speaks frequently about ethics in artificial intelligence and overcoming bias. Her mission is to use artificial emotional intelligence to ‘humanize technology’, developing deep learning platforms that read signals such as facial expressions and tone of voice to understand how users are feeling.

12. Daphne Koller: Co-Founder of Coursera, Founder and CEO of insitro

Website: https://ai.stanford.edu/~koller, Twitter: @DaphneKoller, Google Scholar.

Daphne is a computer scientist and a professor in the Department of Computer Science at Stanford University. She is best known as the co-founder of Coursera, the world’s largest MOOC platform. Her primary research area is artificial intelligence and its application to the biomedical sciences, focusing on decision making, inference, learning and representation in applications such as computer vision and computational biology.

Daphne launched Coursera with Andrew Ng in 2012, serving alongside him as co-CEO and later as President of the company. Time magazine named her one of the world's 100 most influential people in 2012, and Fast Company included her in its Most Creative People list in 2014. In 2016 she left Coursera to become chief computing officer at Calico, and in 2018 she founded insitro, a drug-discovery startup. In 2009 she co-authored a textbook on probabilistic graphical models, which she later offered as a free online course in February 2012.

While this list doesn't encompass the tremendous contributions of every giant in the field, it is ordered by the number of Twitter followers each leader has. Even so, there is much depth to explore in the contributions of these 12 AI leaders, whose research, work and interactions overlap considerably.



How to Become an AI Engineer or Researcher


According to the popular job-posting website Indeed.com, machine learning engineers (a type of AI engineer) make an average annual salary of $150,083 in the United States. ZipRecruiter.com, another job website, reports that AI engineers make an average of $164,769 per year in the U.S.

There is no better time to pursue a career as an AI engineer. In this article, we’ll explore what it takes to launch a career in AI, including:

  • What Is Artificial Intelligence?
  • What Does an AI Engineer Do?
  • Skills Needed to Become an AI Engineer
  • Steps to Becoming an AI Engineer

Artificial Intelligence (also commonly called “AI”) is a technology that mimics and performs tasks that would typically require human intelligence. AI is utilized for countless tasks such as speech recognition, language translation, decision-making, healthcare technology, and more. Advancements in AI are possible thanks to the surplus of data in our lives and advancements made in computer processing power.

The result of this technology is the luxury of self-driving cars, AI-led customer assistance, and even things as seemingly simple as your email provider’s autocorrect and text-editing functionality. AI gives way to opportunities that impact daily life, including breakthroughs that at one point might have only been dreamed of in science fiction but are now very much embedded in our everyday lives.

The typical tasks of an AI engineer will vary based on the industry they’ve chosen to work in. However, here are the common tasks that aspiring AI engineers could expect to perform.

  • Manage and set up product infrastructure and AI development projects.
  • Study and evolve data science prototypes.
  • Implement machine learning algorithms and AI tools within research opportunities based on current parameters.
  • Build AI models from scratch and assist in sharing knowledge of the model’s function.
  • Choose appropriate datasets and data representation methods.
  • Train computer systems and develop them further when needed.
  • Build data transformation infrastructure and automate the infrastructure that the data team uses frequently.
  • Convert machine learning models into application programming interfaces (APIs) so that other applications can use them.
  • Collaborate with other AI engineers or robotics teams as needed to share important processes and developments in the works.
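
The API-wrapping task above can be sketched with a minimal JSON handler. This is a hypothetical illustration: the "model" is a hard-coded linear scorer standing in for a real trained model, and there is no web server, just the request/response plumbing.

```python
import json

def model_predict(features):
    """Stand-in for a trained model: a weighted sum with made-up weights."""
    weights = {"x1": 0.5, "x2": 0.25}  # hypothetical learned weights
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def handle_request(body: str) -> str:
    """Parse a JSON request body, run the model, and return a JSON response."""
    try:
        features = json.loads(body)
        return json.dumps({"score": model_predict(features)})
    except (json.JSONDecodeError, TypeError, AttributeError) as err:
        return json.dumps({"error": str(err)})

print(handle_request('{"x1": 2.0, "x2": 3.0}'))  # {"score": 1.75}
```

In production this handler would sit behind a web framework; the point is the boundary itself: JSON in, model prediction out, with malformed input returned as a structured error rather than a crash.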

Skills Needed to Become an AI Engineer or Researcher

A successful AI engineer will typically possess some if not all of these skills:

Programming Language Fluency - An important skill set for an AI engineer is fluency in multiple programming languages. While knowing Python and R is critical, it’s also necessary to have a strong understanding of data structures and basic algorithms alongside programming literacy.

Mathematical Skills - Developing AI models requires confidence with algorithms and a strong understanding of probability. AI programming draws on statistics, calculus, linear algebra, and numerical analysis to help predict how AI programs will behave.

Data Management Ability - A large part of the typical AI engineer’s workday is spent handling large amounts of data, often with big-data technologies such as Spark or Hadoop that help make sense of it.

Knowledge of Algorithms - A strong knowledge of algorithms and their respective frameworks makes building AI models and implementing machine-learning processes easier, whether the data is structured or unstructured.

Critical Thinking Skills - AI engineers are consistently researching data and trends in order to develop new findings and create AI models. Being able to build a rapid prototype lets an engineer brainstorm new approaches to a model and make improvements. The ability to think critically and quickly is helpful for all AI engineers.
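
As a small worked example of the mathematical side (the data and learning rate below are invented, chosen only for illustration), this snippet fits a line y = w·x + b by gradient descent, using the partial derivatives of the mean-squared error that the calculus skills above refer to:

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by minimizing mean squared error with gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of the MSE with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 9.0]  # roughly y = 2x + 1 with a little noise
w, b = fit_line(xs, ys)
print(round(w, 1), round(b, 1))  # 2.0 1.0
```

The same calculation written in matrix form is the normal equation of linear regression, which is where the linear-algebra skills come in.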

As with most career paths, there are some mandatory prerequisites to launching your AI engineering career. The steps to becoming an AI engineer typically require higher education and certifications.

Step 1: Obtain a Bachelor’s Degree in Computer Information Science

The first step in becoming an AI engineer involves learning the foundations of artificial intelligence: computer information science. Most employers will require a bachelor’s degree in computer information science or computer science to demonstrate that you have mastered the basics of programming and algorithms.

Step 2: Sharpen Technological Fluency

In addition to programming, AI engineers should also have an understanding of software development, machine learning, robotics, data science, and more. These fundamentals will be covered while obtaining a bachelor's degree. 

Some individuals go on to earn a master’s degree in data analytics or mathematics.

Step 3: Seek a Position within the AI Field

Once you’ve achieved your higher education requirements and have developed the technological skills that an AI engineering job demands, it’s time to seek a position within the field of artificial intelligence. AI engineers can work for countless industries – robotics, health care and medicine, marketing and retail, education, government, and many more.

Someone proficient in the science of AI can choose to apply for a job as an AI developer, AI architect, machine learning engineer, data scientist, or AI researcher.

Step 4: Stay Current on AI Trends

As with any career in technology, the knowledge and capabilities of artificial intelligence are constantly evolving. It’s important to stay updated on current trends, new systems, and potential programming changes in order to create the best AI systems for the current market – and so that you stay marketable in your chosen career.

Start a Career in Artificial Intelligence with GMercyU!

AI engineering is a lucrative and exciting career choice, well suited for natural problem solvers and those who enjoy making sense of data and numbers. GMercyU can help you develop your computer science skills to set you up for success as an AI engineer with our Computer Information Science program.

GMercyU’s dedicated, expert faculty will mentor you as you grow your skill set. In addition to hands-on learning, GMercyU AI students also explore the ethical challenges that these powerful technologies bring about, so that you can become a responsible innovator of future AI technologies.

Begin your AI career journey today!


MIT researchers advance automated interpretability in AI models


As artificial intelligence models become increasingly prevalent and are integrated into diverse sectors like health care, finance, education, transportation, and entertainment, understanding how they work under the hood is critical. Interpreting the mechanisms underlying AI models enables us to audit them for safety and biases, with the potential to deepen our understanding of the science behind intelligence itself.

Imagine if we could directly investigate the human brain by manipulating each of its individual neurons to examine their roles in perceiving a particular object. While such an experiment would be prohibitively invasive in the human brain, it is more feasible in another type of neural network: one that is artificial. However, somewhat similar to the human brain, artificial models containing millions of neurons are too large and complex to study by hand, making interpretability at scale a very challenging task. 

To address this, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers decided to take an automated approach to interpreting artificial vision models that evaluate different properties of images. They developed “MAIA” (Multimodal Automated Interpretability Agent), a system that automates a variety of neural network interpretability tasks using a vision-language model backbone equipped with tools for experimenting on other AI systems.

“Our goal is to create an AI researcher that can conduct interpretability experiments autonomously. Existing automated interpretability methods merely label or visualize data in a one-shot process. On the other hand, MAIA can generate hypotheses, design experiments to test them, and refine its understanding through iterative analysis,” says Tamar Rott Shaham, an MIT electrical engineering and computer science (EECS) postdoc at CSAIL and co-author of a new paper about the research. “By combining a pre-trained vision-language model with a library of interpretability tools, our multimodal method can respond to user queries by composing and running targeted experiments on specific models, continuously refining its approach until it can provide a comprehensive answer.”

The automated agent is demonstrated to tackle three key tasks: It labels individual components inside vision models and describes the visual concepts that activate them, it cleans up image classifiers by removing irrelevant features to make them more robust to new situations, and it hunts for hidden biases in AI systems to help uncover potential fairness issues in their outputs. “But a key advantage of a system like MAIA is its flexibility,” says Sarah Schwettmann PhD ’21, a research scientist at CSAIL and co-lead of the research. “We demonstrated MAIA’s usefulness on a few specific tasks, but given that the system is built from a foundation model with broad reasoning capabilities, it can answer many different types of interpretability queries from users, and design experiments on the fly to investigate them.” 

Neuron by neuron

In one example task, a human user asks MAIA to describe the concepts that a particular neuron inside a vision model is responsible for detecting. To investigate this question, MAIA first uses a tool that retrieves “dataset exemplars” from the ImageNet dataset, which maximally activate the neuron. For this example neuron, those images show people in formal attire, and closeups of their chins and necks. MAIA makes various hypotheses for what drives the neuron’s activity: facial expressions, chins, or neckties. MAIA then uses its tools to design experiments to test each hypothesis individually by generating and editing synthetic images — in one experiment, adding a bow tie to an image of a human face increases the neuron’s response. “This approach allows us to determine the specific cause of the neuron’s activity, much like a real scientific experiment,” says Rott Shaham.

MAIA’s explanations of neuron behaviors are evaluated in two key ways. First, synthetic systems with known ground-truth behaviors are used to assess the accuracy of MAIA’s interpretations. Second, for “real” neurons inside trained AI systems with no ground-truth descriptions, the authors design a new automated evaluation protocol that measures how well MAIA’s descriptions predict neuron behavior on unseen data.
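
A drastically simplified version of that second protocol might look like the following, where everything (the keyword-matching predictor, the tagged held-out examples, the 0.5 activation threshold) is invented for illustration; the paper's actual protocol is more sophisticated.

```python
def predicted_active(description, image_tags):
    """Predict activation from a description: does the image contain the concept?"""
    return any(word in image_tags for word in description.split())

def evaluate(description, unseen_data, threshold=0.5):
    """Fraction of held-out examples where the prediction matches actual behavior."""
    correct = 0
    for tags, activation in unseen_data:
        if predicted_active(description, tags) == (activation > threshold):
            correct += 1
    return correct / len(unseen_data)

unseen = [({"necktie", "suit"}, 0.9), ({"dog"}, 0.1),
          ({"necktie"}, 0.8), ({"tree", "sky"}, 0.2)]
print(evaluate("necktie", unseen))  # 1.0
```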

The CSAIL-led method outperformed baseline methods describing individual neurons in a variety of vision models such as ResNet, CLIP, and the vision transformer DINO. MAIA also performed well on the new dataset of synthetic neurons with known ground-truth descriptions. For both the real and synthetic systems, the descriptions were often on par with descriptions written by human experts.

How are descriptions of AI system components, like individual neurons, useful? “Understanding and localizing behaviors inside large AI systems is a key part of auditing these systems for safety before they’re deployed — in some of our experiments, we show how MAIA can be used to find neurons with unwanted behaviors and remove these behaviors from a model,” says Schwettmann. “We’re building toward a more resilient AI ecosystem where tools for understanding and monitoring AI systems keep pace with system scaling, enabling us to investigate and hopefully understand unforeseen challenges introduced by new models.”

Peeking inside neural networks

The nascent field of interpretability is maturing into a distinct research area alongside the rise of “black box” machine learning models. How can researchers crack open these models and understand how they work? Current methods for peeking inside tend to be limited either in scale or in the precision of the explanations they can produce. Moreover, existing methods tend to fit a particular model and a specific task. This caused the researchers to ask: How can we build a generic system to help users answer interpretability questions about AI models while combining the flexibility of human experimentation with the scalability of automated techniques?

One critical area they wanted this system to address was bias. To determine whether image classifiers displayed bias against particular subcategories of images, the team looked at the final layer of the classification stream (in a system designed to sort or label items, much like a machine that identifies whether a photo is of a dog, cat, or bird) and the probability scores of input images (confidence levels that the machine assigns to its guesses). To understand potential biases in image classification, MAIA was asked to find a subset of images in specific classes (for example “labrador retriever”) that were likely to be incorrectly labeled by the system. In this example, MAIA found that images of black labradors were likely to be misclassified, suggesting a bias in the model toward yellow-furred retrievers.
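
The underlying bias check reduces to comparing error rates across a visual subgroup within one class. The sketch below uses invented records (fur color, true label, predicted label), not MAIA's actual pipeline:

```python
from collections import defaultdict

def misclassification_by_subgroup(records):
    """records: (subgroup, true_label, predicted_label) tuples for one class."""
    errors, totals = defaultdict(int), defaultdict(int)
    for subgroup, true_label, predicted in records:
        totals[subgroup] += 1
        if predicted != true_label:
            errors[subgroup] += 1
    return {group: errors[group] / totals[group] for group in totals}

records = [
    ("yellow", "labrador", "labrador"), ("yellow", "labrador", "labrador"),
    ("yellow", "labrador", "labrador"), ("yellow", "labrador", "golden"),
    ("black", "labrador", "rottweiler"), ("black", "labrador", "labrador"),
    ("black", "labrador", "flat-coat"), ("black", "labrador", "rottweiler"),
]
rates = misclassification_by_subgroup(records)
print(rates)  # black labradors are misclassified far more often than yellow ones
```

A large gap between subgroup error rates, as in this toy data, is exactly the kind of signal that would prompt a closer look for bias in the classifier.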

Since MAIA relies on external tools to design experiments, its performance is limited by the quality of those tools; as tools such as image-synthesis models improve, so will MAIA. MAIA also sometimes shows confirmation bias, incorrectly confirming its initial hypothesis. To mitigate this, the researchers built an image-to-text tool, which uses a different instance of the language model to summarize experimental results. Another failure mode is overfitting to a particular experiment, where the model sometimes draws premature conclusions from minimal evidence.

“I think a natural next step for our lab is to move beyond artificial systems and apply similar experiments to human perception,” says Rott Shaham. “Testing this has traditionally required manually designing and testing stimuli, which is labor-intensive. With our agent, we can scale up this process, designing and testing numerous stimuli simultaneously. This might also allow us to compare human visual perception with artificial systems.”

“Understanding neural networks is difficult for humans because they have hundreds of thousands of neurons, each with complex behavior patterns. MAIA helps to bridge this by developing AI agents that can automatically analyze these neurons and report distilled findings back to humans in a digestible way,” says Jacob Steinhardt, assistant professor at the University of California at Berkeley, who wasn’t involved in the research. “Scaling these methods up could be one of the most important routes to understanding and safely overseeing AI systems.”

Rott Shaham and Schwettmann are joined by five fellow CSAIL affiliates on the paper: undergraduate student Franklin Wang; incoming MIT student Achyuta Rajaram; EECS PhD student Evan Hernandez SM ’22; and EECS professors Jacob Andreas and Antonio Torralba. Their work was supported, in part, by the MIT-IBM Watson AI Lab, Open Philanthropy, Hyundai Motor Co., the Army Research Laboratory, Intel, the National Science Foundation, the Zuckerman STEM Leadership Program, and the Viterbi Fellowship. The researchers’ findings will be presented at the International Conference on Machine Learning this week.


Analyze research papers at superhuman speed

Search for research papers, get one sentence abstract summaries, select relevant papers and search for more like them, extract details from papers into an organized table.


Find themes and concepts across many papers


Tons of features to speed up your research

Upload your own PDFs, orient with a quick summary, view sources for every answer, and ask questions of papers.

How do researchers use Elicit?

Over 2 million researchers have used Elicit. Researchers commonly use Elicit to:

  • Speed up literature review
  • Find papers they couldn’t find elsewhere
  • Automate systematic reviews and meta-analyses
  • Learn about a new domain

Elicit tends to work best for empirical domains that involve experiments and concrete results. This type of research is common in biomedicine and machine learning.

What is Elicit not a good fit for?

Elicit does not currently answer questions or surface information that is not written about in an academic paper. It tends to work less well for identifying facts (e.g. "How many cars were sold in Malaysia last year?") and in theoretical or non-empirical domains.

What types of data can Elicit search over?

Elicit searches across 125 million academic papers from the Semantic Scholar corpus, which covers all academic disciplines. When you extract data from papers in Elicit, Elicit will use the full text if available or the abstract if not.

How accurate are the answers in Elicit?

A good rule of thumb is to assume that around 90% of the information you see in Elicit is accurate. While we do our best to increase accuracy without skyrocketing costs, it’s very important for you to check the work in Elicit closely. We try to make this easier for you by identifying all of the sources for information generated with language models.

How can you get in contact with the team?

You can email us at [email protected] or post in our Slack community ! We log and incorporate all user comments, and will do our best to reply to every inquiry as soon as possible.

What happens to papers uploaded to Elicit?

When you upload papers to analyze in Elicit, those papers will remain private to you and will not be shared with anyone else.


What Is Semantic Scholar?

Semantic Scholar is a free, AI-powered research tool for scientific literature, based at the Allen Institute for AI.


JPMorgan pitches in-house chatbot as AI-based research analyst



The White House 1600 Pennsylvania Ave NW Washington, DC 20500

FACT SHEET: Biden-Harris Administration Announces New AI Actions and Receives Additional Major Voluntary Commitment on AI

Nine months ago, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). This Executive Order built on the voluntary commitments he and Vice President Harris received from 15 leading U.S. AI companies last year. Today, the administration announced that Apple has signed onto the voluntary commitments, further cementing these commitments as cornerstones of responsible AI innovation.

In addition, federal agencies reported that they completed all of the 270-day actions in the Executive Order on schedule, following their on-time completion of every other task required to date. Agencies also progressed on other work directed for longer timeframes. Following the Executive Order and a series of calls to action made by Vice President Harris as part of her major policy speech before the Global Summit on AI Safety, agencies all across government have acted boldly. They have taken steps to mitigate AI’s safety and security risks, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more. Actions that agencies reported today as complete include the following:

Managing Risks to Safety and Security: Over 270 days, the Executive Order directed agencies to take sweeping action to address AI’s safety and security risks, including by releasing vital safety guidance and building capacity to test and evaluate AI. To protect safety and security, agencies have:

  • Released for public comment new technical guidelines from the AI Safety Institute (AISI) for leading AI developers in managing the evaluation of misuse of dual-use foundation models. AISI’s guidelines detail how leading AI developers can help prevent increasingly capable AI systems from being misused to harm individuals, public safety, and national security, as well as how developers can increase transparency about their products.
  • Published final frameworks on managing generative AI risks and securely developing generative AI systems and dual-use foundation models. These documents by the National Institute of Standards and Technology (NIST) will provide additional guidance that builds on NIST’s AI Risk Management Framework, which offered individuals, organizations, and society a framework to manage AI risks and has been widely adopted both in the U.S. and globally. NIST also submitted a report to the White House outlining tools and techniques to reduce the risks from synthetic content.
  • Developed and expanded AI testbeds and model evaluation tools at the Department of Energy (DOE). DOE, in coordination with interagency partners, is using its testbeds to evaluate AI model safety and security, especially for risks that AI models might pose to critical infrastructure, energy security, and national security. DOE’s testbeds are also being used to explore novel AI hardware and software systems, including privacy-enhancing technologies that improve AI trustworthiness. The National Science Foundation (NSF) also launched an initiative to help fund researchers outside the federal government design and plan AI-ready testbeds.
  • Reported results of piloting AI to protect vital government software.  The Department of Defense (DoD) and Department of Homeland Security (DHS) reported findings from their AI pilots to address vulnerabilities in government networks used, respectively, for national security purposes and for civilian government. These steps build on previous work to advance such pilots within 180 days of the Executive Order.
  • Issued a call to action from the Gender Policy Council and Office of Science and Technology Policy to combat image-based sexual abuse, including synthetic content generated by AI. Image-based sexual abuse has emerged as one of the fastest growing harmful uses of AI to-date, and the call to action invites technology companies and other industry stakeholders to curb it. This call flowed from Vice President Harris’s remarks in London before the AI Safety Summit, which underscored that deepfake image-based sexual abuse is an urgent threat that demands global action.

Bringing AI Talent into Government

Last year, the Executive Order launched a government-wide AI Talent Surge that is bringing hundreds of AI and AI-enabling professionals into government. Hired individuals are working on critical AI missions, such as informing efforts to use AI for permitting, advising on AI investments across the federal government, and writing policy for the use of AI in government.

  • To increase AI capacity across the federal government for both national security and non-national security missions, the AI Talent Surge has made over 200 hires to-date, including through the Presidential Innovation Fellows AI cohort and the DHS AI Corps .
  • Building on the AI Talent Surge 6-month report , the White House Office of Science and Technology Policy announced new commitments from across the technology ecosystem, including nearly $100 million in funding, to bolster the broader public interest technology ecosystem and build infrastructure for bringing technologists into government service.

Advancing Responsible AI Innovation

President Biden’s Executive Order directed further actions to seize AI’s promise and deepen the U.S. lead in AI innovation while ensuring AI’s responsible development and use across our economy and society. Within 270 days, agencies have:

  • Prepared and will soon release a report on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, including related policy recommendations. The Department of Commerce’s report draws on extensive outreach to experts and stakeholders, including hundreds of public comments submitted on this topic.
  • Awarded over 80 research teams access to computational and other AI resources through the National AI Research Resource (NAIRR) pilot, a national infrastructure led by NSF, in partnership with DOE, NIH, and other governmental and nongovernmental partners, that makes resources available to support the nation’s AI research and education community. Supported projects will tackle deepfake detection, advance AI safety, enable next-generation medical diagnoses and further other critical AI priorities.
  • Released a guide for designing safe, secure, and trustworthy AI tools for use in education. The Department of Education’s guide discusses how developers of educational technologies can design AI that benefits students and teachers while advancing equity, civil rights, trust, and transparency. This work builds on the Department’s 2023 report outlining recommendations for the use of AI in teaching and learning.
  • Published guidance on evaluating the eligibility of patent claims involving inventions related to AI technology, as well as other emerging technologies. The guidance by the U.S. Patent and Trademark Office will guide those inventing in the AI space to protect their AI inventions and assist patent examiners reviewing applications for patents on AI inventions.
  • Issued a report on federal research and development (R&D) to advance trustworthy AI over the past four years. The report by the National Science and Technology Council examines an annual federal AI R&D budget of nearly $3 billion.
  • Launched a $23 million initiative to promote the use of privacy-enhancing technologies to solve real-world problems, including problems related to AI. Working with industry and agency partners, NSF will invest through its new Privacy-preserving Data Sharing in Practice program in efforts to apply, mature, and scale privacy-enhancing technologies for specific use cases and establish testbeds to accelerate their adoption.
  • Announced millions of dollars in further investments to advance responsible AI development and use throughout our society. These include $30 million invested through NSF’s Experiential Learning in Emerging and Novel Technologies program—which supports inclusive experiential learning in fields like AI—and $10 million through NSF’s ExpandAI program, which helps build capacity in AI research at minority-serving institutions while fostering the development of a diverse, AI-ready workforce.

Advancing U.S. Leadership Abroad

President Biden’s Executive Order emphasized that the United States must lead global efforts to unlock AI’s potential and meet its challenges. To advance U.S. leadership on AI, agencies have:

  • Issued a comprehensive plan for U.S. engagement on global AI standards. The plan, developed by NIST, incorporates broad public and private-sector input, identifies objectives and priority areas for AI standards work, and lays out actions for U.S. stakeholders including U.S. agencies. NIST and other agencies will report on priority actions in 180 days.
  • Developed guidance for managing risks to human rights posed by AI. The Department of State’s “Risk Management Profile for AI and Human Rights”—developed in close coordination with NIST and the U.S. Agency for International Development—recommends actions based on the NIST AI Risk Management Framework to governments, the private sector, and civil society worldwide, to identify and manage risks to human rights arising from the design, development, deployment, and use of AI. 
  • Launched a global network of AI Safety Institutes and other government-backed scientific offices to advance AI safety at a technical level. This network will accelerate critical information exchange and drive toward common or compatible safety evaluations and policies.
  • Launched a landmark United Nations General Assembly resolution . The unanimously adopted resolution, with more than 100 co-sponsors, lays out a common vision for countries around the world to promote the safe and secure use of AI to address global challenges.
  • Expanded global support for the U.S.-led Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy. Fifty-five nations now endorse the political declaration, which outlines a set of norms for the responsible development, deployment, and use of military AI capabilities.

The Table below summarizes many of the activities that federal agencies have completed in response to the Executive Order:

Japan rivals Nissan and Honda will share EV components and AI research as they play catch up

Nissan Chief Executive Makoto Uchida, left, and Honda Chief Executive Toshihiro Mibe shake hands during a joint news conference in Tokyo, Thursday, Aug. 1, 2024. Japanese automakers Nissan and Honda say they plan to share components for electric vehicles like batteries and jointly research software for autonomous driving. (Kyodo News via AP)

FILE - Logos at a Nissan showroom are seen in Ginza shopping district in Tokyo, March 31, 2023. (AP Photo/Eugene Hoshiko, File)

FILE - Logos of Honda Motor Co. are pictured in Tsukuba, northeast of Tokyo, on Feb. 13, 2019. (Kyodo News via AP, File)

TOKYO (AP) — Japanese automakers Nissan and Honda say they plan to share components for electric vehicles like batteries and jointly research software for autonomous driving.

A third Japanese manufacturer, Mitsubishi Motors Corp., has joined the Nissan-Honda partnership, sharing the view that speed and size are crucial in responding to dramatic changes in the auto industry centered around electrification.

A preliminary agreement between Nissan Motor Co. and Honda Motor Co. was announced in March.

After 100 days of talks, executives of the companies evinced a sense of urgency. Japanese automakers dominated the era of gasoline engines in recent decades but have fallen behind formidable new players in green cars like Tesla of the U.S. and China’s BYD.

“Companies that don’t adapt to the changes cannot survive,” said Honda Chief Executive Toshihiro Mibe. “If we try to do everything on our own, we cannot catch up.”

Nissan and Honda will use the same batteries and adopt the same specifications for motors and inverters for EV axles, they said.

By coming together in what Mibe and his counterpart at Nissan, Makoto Uchida, repeatedly called “making friends” to achieve economies of scale, the companies plan more strategic investments in technology and aim to cut costs by boosting volume.

Each company will continue to produce and offer its own models. But they will share resources in areas like components and software development, where “making friends” will be a plus, Mibe and Uchida told reporters.

They declined to say whether the friendship will extend to mutual capital ownership, while noting that it wasn’t ruled out.

The two companies also agreed to have their model lineups “mutually complement” each other in various global markets, including both internal combustion engine vehicles and EVs. Details on that are being worked out, the companies said.

Honda and Nissan will also work together on energy services in Japan. Under Thursday’s announcements, Mitsubishi will join as a third member.

Toyota Motor Corp., Japan’s top automaker, is not part of the three-way collaboration.

Although Honda and Nissan have very different corporate cultures, it became clear, as their discussions on working together continued, that their engineers and other workers on the ground have a lot in common, Uchida said.

“Speed is the most crucial element, considering our size,” he added.

Uchida and Mibe repeatedly stressed speed, openly admitting BYD is moving very quickly, but they said there was still time to catch up and remain in the game.

“In coming together, we will show that one plus one will add up to become more than two,” Uchida said.

Yuri Kageyama is on X: https://twitter.com/yurikageyama

Apple used Google's chips to train two AI models, research paper shows


Reporting by Max A. Cherney in San Francisco; Editing by Matthew Lewis and Varun H K

Jul. 30, 2024

Breaking MAD: Generative AI could break the internet

Training AI systems on synthetic data could have negative consequences.

Generative artificial intelligence (AI) models like OpenAI’s GPT-4o or Stability AI’s Stable Diffusion are surprisingly capable of creating new text, code, images and videos. Training them, however, requires such vast amounts of data that developers are already running up against supply limitations and may soon exhaust training resources altogether.

Against this backdrop of data scarcity, using synthetic data to train future generations of AI models may seem like an alluring option to big tech for a number of reasons: AI-synthesized data is cheaper than real-world data and virtually limitless in supply; it poses fewer privacy risks (as in the case of medical data); and in some cases, synthetic data may even improve AI performance.

However, recent work by the Digital Signal Processing group at Rice University has found that a diet of synthetic data can have significant negative impacts on generative AI models’ future iterations.

“The problems arise when this synthetic data training is, inevitably, repeated, forming a kind of a feedback loop ⎯ what we call an autophagous or ‘self-consuming’ loop,” said Richard Baraniuk, Rice’s C. Sidney Burrus Professor of Electrical and Computer Engineering. “Our group has worked extensively on such feedback loops, and the bad news is that even after a few generations of such training, the new models can become irreparably corrupted. This has been termed ‘model collapse’ by some ⎯ most recently by colleagues in the field in the context of large language models (LLMs). We, however, find the term ‘Model Autophagy Disorder’ (MAD) more apt, by analogy to mad cow disease.”

Mad cow disease is a fatal neurodegenerative illness that affects cows and has a human equivalent caused by consuming infected meat. A major outbreak in the 1980s and ’90s brought attention to the fact that mad cow disease proliferated as a result of the practice of feeding cows the processed leftovers of their slaughtered peers ⎯ hence the term “autophagy,” from the Greek auto-, meaning “self,” and phagy, meaning “to eat.”

“We captured our findings on MADness in a paper presented in May at the International Conference on Learning Representations (ICLR),” Baraniuk said.

The study, titled “Self-Consuming Generative Models Go MAD,” is the first peer-reviewed work on AI autophagy and focuses on generative image models like the popular DALL·E 3, Midjourney and Stable Diffusion. “We chose to work on visual AI models to better highlight the drawbacks of autophagous training, but the same mad cow corruption issues occur with LLMs, as other groups have pointed out,” Baraniuk said.

The internet is usually the source of generative AI models’ training datasets, so as synthetic data proliferates online, self-consuming loops are likely to emerge with each new generation of a model. To get insight into different scenarios of how this might play out, Baraniuk and his team studied three variations of self-consuming training loops designed to provide a realistic representation of how real and synthetic data are combined into training datasets for generative models:

● fully synthetic loop ⎯ Successive generations of a generative model were fed a fully synthetic data diet sampled from prior generations’ output.

● synthetic augmentation loop ⎯ The training dataset for each generation of the model included a combination of synthetic data sampled from prior generations and a fixed set of real training data.

● fresh data loop ⎯ Each generation of the model is trained on a mix of synthetic data from prior generations and a fresh set of real training data.
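To build intuition for why the fully synthetic loop degrades, here is a minimal statistical analogue. As an assumption of this sketch, the "model" is simply a Gaussian refit to the previous generation's samples rather than an actual generative network:

```python
import numpy as np

# Toy analogue of a fully synthetic ("self-consuming") training loop:
# each generation's "model" is a Gaussian fit to synthetic samples drawn
# from the previous generation's model. This is an illustrative sketch,
# not the paper's StyleGAN/diffusion setup.

rng = np.random.default_rng(0)

samples = rng.normal(loc=0.0, scale=1.0, size=25)  # small "real" dataset
stds = [samples.std()]

for generation in range(300):
    mu, sigma = samples.mean(), samples.std()
    # Train the next model purely on the current model's synthetic output.
    samples = rng.normal(mu, sigma, size=25)
    stds.append(samples.std())

# The spread (a stand-in for diversity) collapses across generations.
print(f"std at generation 0:   {stds[0]:.4g}")
print(f"std at generation 300: {stds[-1]:.4g}")
```

With a small dataset, each refit looks individually reasonable, yet the estimated spread performs a downward-drifting random walk, so diversity withers without fresh real data, mirroring the "fresh data loop" as the healthy case.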

Progressive iterations of the loops revealed that, over time and in the absence of sufficient fresh real data, the models would generate increasingly warped outputs lacking in quality, diversity or both. In other words, the more fresh data, the healthier the AI.

Side-by-side comparisons of image datasets resulting from successive generations of a model paint an eerie picture of potential AI futures. Datasets consisting of human faces become increasingly streaked with gridlike scars ⎯ what the authors call “generative artifacts” ⎯ or look more and more like the same person. Datasets consisting of numbers morph into indecipherable scribbles.

“Our theoretical and empirical analyses have enabled us to extrapolate what might happen as generative models become ubiquitous and train future models in self-consuming loops,” Baraniuk said. “Some ramifications are clear: without enough fresh real data, future generative models are doomed to MADness.”

To make these simulations even more realistic, the researchers introduced a sampling bias parameter to account for “cherry picking” ⎯ the tendency of users to favor data quality over diversity, i.e. to trade off variety in the types of images and texts in a dataset for images or texts that look or sound good. The incentive for cherry picking is that data quality is preserved over a greater number of model iterations, but this comes at the expense of an even steeper decline in diversity.
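The same toy setup can illustrate the cherry-picking trade-off. The "quality" score below (closeness to a reference value) is a made-up stand-in for human curation, not the paper's actual sampling-bias parameter:

```python
import numpy as np

# Toy analogue of sampling bias ("cherry picking"): before each retraining
# step, keep only the synthetic samples judged highest "quality" -- here,
# simply the ones closest to a reference value of 0. The quality score and
# the Gaussian "model" are illustrative stand-ins for the paper's setup.

rng = np.random.default_rng(1)

def next_generation(samples, cherry_pick):
    """Fit a Gaussian, draw synthetic samples, optionally keep the top half."""
    mu, sigma = samples.mean(), samples.std()
    synthetic = rng.normal(mu, sigma, size=1000)
    if cherry_pick:
        quality = -np.abs(synthetic)  # higher "quality" = closer to reference
        synthetic = synthetic[np.argsort(quality)[-500:]]  # keep the top half
    return synthetic

real = rng.normal(0.0, 1.0, size=1000)
unbiased, biased = real.copy(), real.copy()
for _ in range(10):
    unbiased = next_generation(unbiased, cherry_pick=False)
    biased = next_generation(biased, cherry_pick=True)

# Cherry picking preserves "quality" but destroys diversity much faster.
print(f"spread without cherry picking: {unbiased.std():.3f}")
print(f"spread with cherry picking:    {biased.std():.3f}")
```

Every round of selection truncates the distribution around the reference value, so the biased loop's spread shrinks geometrically while the unbiased loop's spread drifts only slowly, echoing the quality-for-diversity trade-off described above.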

“One doomsday scenario is that if left uncontrolled for many generations, MAD could poison the data quality and diversity of the entire internet,” Baraniuk said. “Short of this, it seems inevitable that as-to-now-unseen unintended consequences will arise from AI autophagy even in the near term.”

In addition to Baraniuk, study authors include Rice Ph.D. students Sina Alemohammad; Josue Casco-Rodriguez; Ahmed Imtiaz Humayun; Hossein Babaei; Rice Ph.D. alumnus Lorenzo Luzi; Rice Ph.D. alumnus and current Stanford postdoctoral student Daniel LeJeune; and Simons Postdoctoral Fellow Ali Siahkoohi.

The research was supported by the National Science Foundation, Office of Naval Research, the Air Force Office of Scientific Research and the Department of Energy.

Self-Consuming Generative Models Go MAD | International Conference on Learning Representations (ICLR), 2024 | https://openreview.net/pdf?id=ShjMHfmPs0 | Authors: Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi and Richard Baraniuk

https://news-network.rice.edu/news/files/2024/07/progressive_artifact_amplification.jpg
CAPTION: Generative artificial intelligence (AI) models trained on synthetic data generate outputs that are progressively marred by artifacts. In this example, the researchers trained a succession of StyleGAN-2 generative models using fully synthetic data. Each of the six image columns displays a couple of examples generated by the first, third, fifth and ninth generation model, respectively. With each iteration of the loop, the cross-hatched artifacts become progressively amplified. (Image courtesy of Digital Signal Processing Group/Rice University)

https://news-network.rice.edu/news/files/2024/07/schematic.jpg
CAPTION: Richard Baraniuk and his team at Rice University studied three variations of self-consuming training loops designed to provide a realistic representation of how real and synthetic data are combined into training datasets for generative models. The schematic illustrates the three training scenarios, i.e. a fully synthetic loop, a synthetic augmentation loop (synthetic + fixed set of real data), and a fresh data loop (synthetic + new set of real data). (Image courtesy of Digital Signal Processing Group/Rice University)

https://news-network.rice.edu/news/files/2024/07/without-sampling-bias.jpg
CAPTION: Progressive transformation of a dataset consisting of numerals 1 through 9 across 20 model iterations of a fully synthetic loop without sampling bias (top panel), and corresponding visual representation of data mode dynamics for real (red) and synthetic (green) data (bottom panel). In the absence of sampling bias, synthetic data modes separate from real data modes and merge. This translates into a rapid deterioration of model outputs: If all numerals are fully legible in generation 1 (leftmost column, top panel), by generation 20 all images have become illegible (rightmost column, top panel). (Image courtesy of Digital Signal Processing Group/Rice University)

https://news-network.rice.edu/news/files/2024/07/with-sampling-bias.jpg
CAPTION: Progressive transformation of a dataset consisting of numerals 1 through 9 across 20 model iterations of a fully synthetic loop with sampling bias (top panel), and corresponding visual representation of data mode dynamics for real (red) and synthetic (green) data (bottom panel). With sampling bias, synthetic data modes still separate from real data modes, but, rather than merging, they collapse around individual, high-quality images. This translates into a prolonged preservation of higher quality data across iterations: All but a couple of the numerals are still legible by generation 20 (rightmost column, top panel). While sampling bias preserves data quality longer, this comes at the expense of data diversity. (Image courtesy of Digital Signal Processing Group/Rice University)

https://news-network.rice.edu/news/files/2024/07/sampling-with-bias.jpg
CAPTION: The incentive for cherry picking ⎯ the tendency of users to favor data quality over diversity ⎯ is that data quality is preserved over a greater number of model iterations, but this comes at the expense of an even steeper decline in diversity. Pictured are sample image outputs from a first, third and fifth generation model of a fully synthetic loop with a sampling bias parameter. With each iteration, the dataset becomes increasingly homogeneous. (Image courtesy of Digital Signal Processing Group/Rice University)


What Makes a Grammy Winner? Researchers Turn to AI to Provide Some Clues

Photo credit: AzmanL/Getty Images

Whether it’s the Oscars, the Tonys, or the Grammys, observers annually make predictions as to which actor, film, musical, or song will win these coveted awards—with forecasts based on what experts say impresses the voters. “Grammy voters love to give Record of the Year to a carefully crafted throwback jam,” the Los Angeles Times wrote ahead of this year’s Grammy Awards.

A team of New York University researchers has systematized this process by creating an algorithm that takes into account a song’s traits, such as its lyrics, along with other information, including Billboard rankings, to illuminate the variables of successful songs—specifically, those voted as winners for Song of the Year, Record of the Year, and Rap Song of the Year in 2021, 2022, and 2023. In doing so, the work goes beyond some previous methods by not only making predictions, but also by identifying the traits of Grammy winners.

“Spotting award-winning art is surely a subjective process and is complicated by the secrecy surrounding voters’ decisions,” says Anasse Bari, a clinical associate professor at New York University’s Courant Institute of Mathematical Sciences and the senior author of the study, which appears on IEEE Xplore, published by the Institute of Electrical and Electronics Engineers. “However, by taking into account what we know about the songs themselves—from their make-up to their popularity—we can pinpoint those likely to be celebrated.

“We think this AI tool could help to identify emerging artists and trends by unearthing music that is likely to be popular—and that otherwise might go undiscovered.”

In constructing the AI tool, the researchers created a dataset of nominees from 2004 to 2020 across three award categories—Song of the Year, Record of the Year, and Rap Song of the Year—totaling nearly 250 songs. They then combined a range of variables and trained AI algorithms to learn from these historical data, which included Billboard rankings and Google search volume (how frequently users searched for a nominated song in the year it was nominated).

The algorithm also took into account a song’s musical characteristics, using Spotify data deployed by previous studies, which included the following:

  • Acousticness: Whether or not the track is acoustic (i.e., reliant on non-electric instruments or sounds)
  • Danceability: How suitable a track is for dancing
  • Energy: A perceptual measure of intensity and activity
  • Instrumentalness: A measure of the lack of vocals in a track
  • Speechiness: The presence of spoken words in a track

Finally, the AI tool included a song’s lyrics, using commonly deployed Natural Language Processing algorithms to capture words and the sentiments these words conveyed. The calculations revealed a song’s vocabulary diversity, its emotional tone (e.g., happy, sad, angry), and even profane language.
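A minimal sketch of such lyric-derived features is below. The word lists, the type-token-ratio diversity measure, and the scoring are illustrative stand-ins, not the study's actual NLP pipeline:

```python
import re

# Illustrative lyric features: vocabulary diversity and a crude emotional
# tone score. The tiny word lists here are assumptions for demonstration;
# real sentiment analysis would use full lexicons or trained models.

HAPPY = {"love", "dance", "shine", "joy"}
SAD = {"cry", "alone", "rain", "goodbye"}

def lyric_features(lyrics: str) -> dict:
    words = re.findall(r"[a-z']+", lyrics.lower())
    unique = set(words)
    return {
        "word_count": len(words),
        # Type-token ratio: one common measure of vocabulary diversity.
        "vocab_diversity": len(unique) / len(words) if words else 0.0,
        # Crude sentiment: balance of happy vs. sad words per word.
        "happiness": (sum(w in HAPPY for w in words)
                      - sum(w in SAD for w in words)) / max(len(words), 1),
    }

feats = lyric_features("I love to dance, dance in the rain, dance alone")
print(feats)
```

Features like these become numeric columns alongside the Billboard and audio variables, so the learning algorithm can weigh lyrical tone against popularity signals.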

The researchers then determined if the resulting algorithm could generate a list of likely winners by identifying the top three candidates drawn from all of the nominees for Song of the Year, Record of the Year, and Rap Song of the Year in each year of the studied period (2021-2023)—a total of 27 songs from among approximately 75 nominees.
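The overall pipeline (train on historical nominees' features, then score and shortlist a later year's nominees) might be sketched roughly as follows. The synthetic data and the plain logistic-regression classifier are illustrative assumptions, not the study's actual dataset or algorithms:

```python
import numpy as np

# Hedged sketch: fit a simple classifier on historical nominees' features
# (e.g. Billboard rank, search volume, audio traits) and shortlist the
# three highest-scored nominees for a later year. All data here is fake.

rng = np.random.default_rng(42)

# Fake historical training set: 200 nominees, 5 numeric features each.
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -1.0, 0.8, 0.0, 0.5])     # hidden "winner" pattern
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 1.0).astype(float)

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(5), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))            # predicted win probability
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * (p - y).mean()

# Score a new year's 10 nominees and shortlist the top three.
nominees = rng.normal(size=(10, 5))
scores = 1 / (1 + np.exp(-(nominees @ w + b)))
top3 = np.argsort(scores)[-3:][::-1]
print("shortlisted nominee indices:", top3)
```

A per-category top-three shortlist like this is what the study evaluated: the model counts as successful if the actual winner appears among the three shortlisted nominees.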

The results showed that the model accurately included all nine winning songs across the three categories in its top three list—among them, Billie Eilish’s “everything i wanted” (2021 Record of the Year), Silk Sonic’s “Leave the Door Open” (2022 Song of the Year), and Kendrick Lamar’s “The Heart Part 5” (2023 Rap Song of the Year). 

The authors add that some of the model’s predictions ran counter to those made by betting sites. For instance, Bonnie Raitt’s “Just Like That,” which the model placed in its top three for 2023 Song of the Year, was seen by gambling platforms as one of the songs least likely to win that year. In addition, H.E.R.’s Grammy-winning “I Can’t Breathe,” which the model placed in its top three for 2021 Song of the Year, was viewed as a long shot by betting sites.

Interestingly, the predictive features varied among the categories. For Song of the Year, the most predictive features included energy, acousticness, and the peak Billboard position of the song. By contrast, for Record of the Year, the most predictive features included speechiness, profanity, and acousticness. For Rap Song of the Year, the most predictive features included vocabulary diversity, the number of words, and the happiness score. 

While the study’s authors caution that the algorithm is not a precise prediction tool that forecasts winners, it can nonetheless surface wide-ranging attributes associated with successful tunes.

“Our findings highlight the importance of considering multiple factors, such as popularity and music-specific features, when predicting the winners of music awards,” says Bari, who leads the Courant Institute’s Predictive Analytics and AI Research Lab. “More broadly, the work shows the potential of using machine learning and data-driven techniques to gain insights into the factors that contribute to a song’s success.”

The study’s other authors were members of Bari’s AI research group in NYU’s Department of Computer Science: Rushabh Musthyala, Abhishek Narayanan, and Anirudh Nistala.


  23. JPMorgan pitches in-house chatbot as AI-based research analyst

    JPMorgan Chase has begun rolling out a generative artificial intelligence product, telling employees that its own version of OpenAI's ChatGPT can do the work of a research analyst.

  24. FACT SHEET: Biden-Harris Administration Announces New AI Actions and

    Awarded over 80 research teams' access to computational and other AI resources through the National AI Research Resource (NAIRR) pilot—a national infrastructure led by NSF, in partnership with ...

  25. JPMorgan Gives Staff AI-Powered 'Research Analyst' Chatbot

    JPMorgan Chase & Co. has launched a generative artificial intelligence tool and told employees to think of it as a research analyst that can offer information, solutions and advice, according ...

  26. Japan rivals Nissan and Honda will share EV components and AI research

    Japan rivals Nissan and Honda will share EV components and AI research as they play catch up. 1 of 4 | Nissan Chief Executive Makoto Uchida, left, and Honda Chief Executive Toshihiro Mibe shake hands during a joint news conference in Tokyo, Thursday, Aug. 1, 2024. Japanese automakers Nissan and Honda say they plan to share components for ...

  27. Apple used Google's chips to train two AI models, research paper shows

    Apple relied on chips designed by Google rather than industry leader Nvidia to build two key components of its artificial intelligence software infrastructure for its forthcoming suite of AI tools ...

  28. Breaking MAD: Generative AI could break the internet

    Generative artificial intelligence (AI) models trained on synthetic data generate outputs that are progressively marred by artifacts. In this example, the researchers trained a succession of StyleGAN-2 generative models using fully synthetic data.

  29. AI For Impact: How Tech-Driven Sustainability Is Good For Business

    Discover how AI PCs powered by Intel can transform the workplace, from meeting recaps to instant presentations.

  30. What Makes a Grammy Winner? Researchers Turn to AI to Provide ...

    "We think this AI tool could help to identify emerging artists and trends by unearthing music that is likely to be popular—and that otherwise might go undiscovered." In constructing the AI tool, the researchers created a dataset of nominees from 2004 to 2020 across three award categories—Song of the Year, Record of the Year, and Rap ...