WhistleBuzz – Smart News on AI, Business, Politics & Global Trends
AI

Guide Labs Debuts New Kind of Interpretable LLM

By Editor-In-Chief | February 23, 2026 | 4 Mins Read


The challenge with deep learning models is often understanding why a model behaves the way it does. Whether it's xAI's repeated struggles to fine-tune Grok's bizarre politics, or ChatGPT's bouts of sycophancy and mundane hallucinations, interpreting neural networks with billions of parameters isn't easy.

Guide Labs, a San Francisco startup founded by CEO Julius Adebayo and chief scientific officer Aya Abdelsalam Ismail, is offering an answer to that question. On Monday, the company open sourced its 8-billion-parameter LLM, Steelling-8B, trained on a new architecture designed to make the model's behavior easy to interpret: every token the model generates can be traced back to its origins in the LLM's training data.

That tracing can be as simple as identifying the factual sources the model cites, or as complex as mapping how the model represents concepts like humor or gender.
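The token-to-source tracing described above can be pictured with a toy data structure like the following. The class, method names, and document identifiers are purely hypothetical; Guide Labs has not published Steelling-8B's actual attribution interface.

```python
# Toy illustration of token-level attribution: each generated token
# carries pointers back to the training documents claimed to produce it.
# Everything here (class, methods, doc ids) is illustrative only.

class AttributedOutput:
    def __init__(self):
        self.tokens = []   # generated tokens, in order
        self.sources = []  # parallel list: training-doc ids per token

    def emit(self, token, doc_ids):
        """Record a token together with its training-data origins."""
        self.tokens.append(token)
        self.sources.append(list(doc_ids))

    def trace(self, index):
        """Return the training documents behind the token at `index`."""
        return self.sources[index]

out = AttributedOutput()
out.emit("Paris", ["wiki_france_0412"])
out.emit("is", ["common_crawl_88", "books_17"])
print(out.trace(0))  # ['wiki_france_0412']
```

The point of the sketch is only the shape of the guarantee: attribution is recorded at generation time rather than reconstructed afterward by probing the network.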

“If there are a trillion ways to encode gender, and I encode it in a billion of the trillion things that I have, I have to make sure that I can find all the billion things that I encoded. And I have to be able to reliably turn it on and turn it off,” Adebayo told TechCrunch. “You can do that with the current model, but it’s very fragile…It’s kind of one of those holy grail questions.”

Adebayo began this research while completing his PhD at MIT, where he co-authored a widely cited 2018 paper showing that existing methods of understanding deep learning models are unreliable. That work ultimately led to a new way to build an LLM: developers insert conceptual layers into the model that classify data into categories that can be tracked. This requires more up-front data annotation, but by leveraging other AI models to do the labeling, Guide Labs was able to train Steelling-8B as its largest proof of concept to date.
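As a rough sketch of what such a conceptual layer might look like — a simplification assuming something like a concept-bottleneck design, which the article does not confirm — the hidden state is projected onto human-named concept scores that can be inspected and switched off. The concept names and weights below are invented for illustration:

```python
# Hypothetical "conceptual layer": instead of passing an opaque hidden
# vector straight through, the model first scores a fixed set of named
# concepts, then predicts from those scores. Names and numbers are
# illustrative, not Guide Labs' actual taxonomy.

CONCEPTS = ["humor", "gender", "finance", "violence"]

def concept_layer(hidden, weights):
    """Project a hidden vector onto named, inspectable concept scores."""
    scores = {}
    for name, w in zip(CONCEPTS, weights):
        scores[name] = sum(h * wi for h, wi in zip(hidden, w))
    return scores

def ablate(scores, concept):
    """'Turn off' a concept by zeroing its score, as Adebayo describes."""
    out = dict(scores)
    out[concept] = 0.0
    return out

hidden = [0.2, -0.5, 0.7]
weights = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
scores = concept_layer(hidden, weights)
print(scores["humor"])                      # 0.2
print(ablate(scores, "gender")["gender"])   # 0.0
```

Because every downstream prediction flows through these named scores, "finding everywhere a concept is encoded" reduces to reading one dictionary rather than doing post-hoc neuroscience on billions of weights.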

“The kind of interpretability that people do… is model-based neuroscience, and we turn it on its head,” Adebayo said. “What we’re really doing is designing a model from scratch so that we don’t have to do any neuroscience.”

Image credit: Guide Labs

One concern with this approach is that it may eliminate some of the emergent behaviors that make LLMs so interesting: the ability to generalize in new ways about things they were never trained on. Adebayo says that still happens in his company's model; his team tracks what it calls "discovered concepts," which the model identifies on its own.

Adebayo argues that this interpretable architecture is something everyone will eventually need. For consumer LLMs, model builders could use these techniques to block the use of copyrighted material and better control output on subjects such as violence and substance abuse. Regulated industries such as finance will require LLMs with more control: models that evaluate loan applicants should weigh things like financial records, not race. Interpretability is also needed in scientific research, another area where Guide Labs has developed technology. Protein folding has been a huge success for deep learning models, but scientists want more insight into why the software finds promising combinations.

“This model shows that training interpretable models is no longer a kind of science, but an engineering problem,” Adebayo said. “We’ve cracked the science and we’ve been able to extend it. There’s no reason why this kind of model can’t match the performance of frontier-level models with more parameters.”

According to Guide Labs, Steelling-8B achieves 90% of the performance of existing models while using less training data, thanks to its new architecture. The company, which emerged from Y Combinator and raised a $9 million seed round from Initialized Capital in November 2024, plans to build a larger model next and to start offering API and agent access to users.

“The way we currently train models is so primitive that democratizing inherent interpretability will actually be good for our role in humanity in the long run,” Adebayo told TechCrunch. “We’re chasing these models that are going to be super intelligent, so you don’t want something mysterious to you making decisions for you.”



