typedef (@typedef.ai)
We are here to eat bamba and revolutionize the world of query engines. The Spark is gone; let's rethink data processing with a pinch of AI.
Note: auto-routing is being explored; today you keep full control.
Check the repo for more: github.com/typedef-ai/f...
GitHub - typedef-ai/fenic: Build reliable AI and agentic applications with DataFrames
Build reliable AI and agentic applications with DataFrames - typedef-ai/fenic
github.com
October 21, 2025 at 11:07 PM
Mix providers (OpenAI, Anthropic) with simple aliases

Use defaults for simple ops; override model_alias for complex ones

Balance cost/latency/quality without extra orchestration
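Here is a minimal sketch of those three points in Python. Class and parameter names (SessionConfig, SemanticConfig, OpenAILanguageModel, AnthropicLanguageModel, model_alias, and the semantic.map template form) are recalled from the fenic docs and may differ by release; treat them as assumptions and verify against the repo.

```python
import fenic as fc

# Register two providers once, under aliases; "cheap" is the session default.
config = fc.SessionConfig(
    app_name="multi_model_demo",
    semantic=fc.SemanticConfig(
        language_models={
            "cheap": fc.OpenAILanguageModel(model_name="gpt-4o-mini", rpm=500, tpm=200_000),
            "strong": fc.AnthropicLanguageModel(
                model_name="claude-3-5-sonnet-latest",
                rpm=100,
                input_tpm=100_000,
                output_tpm=50_000,
            ),
        },
        default_language_model="cheap",
    ),
)
session = fc.Session.get_or_create(config)

tickets = session.create_dataframe(
    {"ticket": ["App crashes when exporting a report", "How do I reset my password?"]}
)

triaged = tickets.select(
    fc.col("ticket"),
    # Simple op: no alias given, so the "cheap" default handles it.
    fc.semantic.classify(fc.col("ticket"), ["bug", "question"]).alias("kind"),
    # Harder op: override model_alias to route it to the stronger model.
    fc.semantic.map(
        "Draft a short, polite support reply to: {{ ticket }}",
        ticket=fc.col("ticket"),
        model_alias="strong",
    ).alias("reply"),
)
triaged.show()
```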
October 21, 2025 at 11:07 PM
Teams often wire a single model and pay in either cost or quality.

With Fenic, you register multiple models once and select them per call.
October 21, 2025 at 11:07 PM
fenic's Multiple Model Configuration & Selection lets you pick the right model for each step: cheap where you can, powerful where you must.

Think of it as a per-operator model dial across your pipeline.
October 21, 2025 at 11:07 PM
Thanks to @danielvanstrien.bsky.social and @lhoestq.hf.co for the collaboration and feedback that made this possible, and to David Youngworth, who built and maintains the integration!
October 21, 2025 at 6:57 PM
A few things you can do with this new integration:

1. Rehydrate the same agent context anywhere (local → prod)
2. Publish versioned, auditable datasets for experiments & benchmarks
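For instance, a hedged sketch of the publish step using the standard datasets client. Calling to_pandas() on a fenic DataFrame is an assumption on my part, and the repo id is hypothetical; the integration may also expose a more direct path (see the docs link below).

```python
from datasets import Dataset, load_dataset
import fenic as fc

# Assumes `df` is a fenic DataFrame holding the snapshot you want to share,
# and that collecting it to pandas is supported (an assumption).
snapshot = Dataset.from_pandas(df.to_pandas())

# Publish a versioned, shareable copy on the Hub (repo id is hypothetical).
snapshot.push_to_hub("your-org/agent-context-snapshot", private=True)

# Later, anywhere (local or prod), rehydrate the same context by name.
context = load_dataset("your-org/agent-context-snapshot", split="train")
```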
fenic
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
huggingface.co
October 21, 2025 at 6:57 PM
Fenic ❤️ Hugging Face Datasets!

You can now turn any fenic snapshot into a shareable, versioned dataset on @hf.co, perfect for reproducible agent contexts and data sandboxes.

Docs: huggingface.co/docs/hub/dat...
fenic
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
huggingface.co
October 21, 2025 at 6:57 PM
"AI confidence is high — but production results still lag."
Our co-founder, Yoni Michael, shares why in CIO.

Read it here 👉 www.cio.com/article/4069...

#CIO #AIinEnterprise #Typedef
CIOs’ AI confidence yet to match results
While a large percentage of IT and business leaders believe their AI efforts will meet or exceed expectations, only a small number have successfully deployed projects thus far.
www.cio.com
October 20, 2025 at 11:07 PM
Common patterns: multi-step enrichment, RAG prep, nightly jobs with partial recomputes.

For more, check the GitHub repo: github.com/typedef-ai/f...
September 24, 2025 at 1:38 AM
With fenic, it’s explicit and simple: call .cache() where it matters.

Protect pricey semantic ops (classify/extract) from re-execution

Reuse cached results across multiple downstream analyses

Recover from mid-pipeline failures without starting over
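A minimal sketch of the checkpoint pattern. The input file is hypothetical, and the session/operator names (SemanticConfig, semantic.classify, .cache()) follow the docs as recalled; verify against the repo.

```python
import fenic as fc

session = fc.Session.get_or_create(
    fc.SessionConfig(
        app_name="cache_demo",
        semantic=fc.SemanticConfig(
            language_models={"default": fc.OpenAILanguageModel(model_name="gpt-4o-mini", rpm=500, tpm=200_000)},
            default_language_model="default",
        ),
    )
)

reviews = session.read.csv("reviews.csv")  # hypothetical input

# Pay for the LLM classification once, then checkpoint it with .cache().
labeled = reviews.select(
    fc.col("body"),
    fc.semantic.classify(fc.col("body"), ["positive", "negative", "neutral"]).alias("sentiment"),
).cache()

# Downstream analyses (and retries after a failure) reuse the cached rows
# instead of re-running the semantic op and re-paying tokens.
labeled.filter(fc.col("sentiment") == "negative").show()
labeled.filter(fc.col("sentiment") == "positive").show()
```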
September 24, 2025 at 1:38 AM
Think of it as checkpointing for LLM workloads: cache after costly ops, restart from there if something fails.

Without caching, teams re-pay tokens and time on retries: flaky APIs, disk hiccups, long recomputes.
September 24, 2025 at 1:38 AM
fenic's Local Data Caching & Persistence keeps expensive AI steps from rerunning and your pipelines resilient.
September 24, 2025 at 1:38 AM
Mix providers (OpenAI, Anthropic) with simple aliases

Use defaults for simple ops; override model_alias for complex ones

Balance cost/latency/quality without extra orchestration
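Concretely, the "dial" is just the model_alias argument on each semantic operator. A hedged sketch, same caveat as the config example earlier in this feed: class and parameter names are assumptions recalled from the docs.

```python
import fenic as fc

# Two aliases, "cheap" as the session default.
session = fc.Session.get_or_create(
    fc.SessionConfig(
        app_name="model_dial_demo",
        semantic=fc.SemanticConfig(
            language_models={
                "cheap": fc.OpenAILanguageModel(model_name="gpt-4o-mini", rpm=500, tpm=200_000),
                "strong": fc.OpenAILanguageModel(model_name="gpt-4o", rpm=100, tpm=100_000),
            },
            default_language_model="cheap",
        ),
    )
)

notes = session.create_dataframe({"note": ["Customer churned after the Q3 price change ..."]})

summaries = notes.select(
    # Default alias: quick, low-cost gist.
    fc.semantic.map("One-line gist of: {{ note }}", note=fc.col("note")).alias("gist"),
    # Per-operator override: spend more only where quality matters.
    fc.semantic.map(
        "Write a detailed churn post-mortem from: {{ note }}",
        note=fc.col("note"),
        model_alias="strong",
    ).alias("post_mortem"),
)
summaries.show()
```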
September 22, 2025 at 11:07 PM
Teams often wire a single model and pay in either cost or quality.

With Fenic, you register multiple models once and select them per call.
September 22, 2025 at 11:07 PM
fenic's Multiple Model Configuration & Selection lets you pick the right model for each step: cheap where you can, powerful where you must.

Think of it as a per-operator model dial across your pipeline.
September 22, 2025 at 11:07 PM
Why do most AI projects stall?
Because going from prototype → production is HARD.
On The Data Exchange podcast, we share how Typedef makes inference-first pipelines actually work at scale.
👉 thedataexchange.media/typedef-fenic/
The Fenic Approach to Production-Ready Data Processing
Kostas Pardalis on Inference-First Data Frames, Markdown as Structure, Semantic Query Operations, and Production AI Debugging.
thedataexchange.media
September 21, 2025 at 12:05 AM
We’re honored to be featured in AI World Today! 🚀
Our co-founder Yoni Michael shares how Typedef is closing the gap between AI prototypes and production, making inference a first-class data operation.
👉 Read the full interview: www.aiworldtoday.net/p/interview-...
Bridging the AI Gap: How Yoni Iny's Typedef is Revolutionizing Data Processing
Yoni Michael, tech veteran and Typedef co-founder, transforms AI-powered data analytics with an innovative serverless platform for LLM workflows.
www.aiworldtoday.net
September 20, 2025 at 9:36 PM
We’re building the AI-native, inference-first infrastructure that powers scalable, production-ready LLM pipelines—no infrastructure headaches, just reliable results. Read more in AIM about how we’re overcoming pilot paralysis: aimmediahouse.com/ai-startups/...
For AI to Scale, Infrastructure Has to Change – Typedef Gets It
Typedef, a new AI infrastructure startup that officially launched on June 18, 2025, raised $5.5 million in seed funding, led by Pear VC.
aimmediahouse.com
September 20, 2025 at 8:42 PM
Fenic brings the reliability of DataFrame pipelines to AI workloads—semantic joins, markdown parsing, transcripts, and more—now strengthened with the 0.3.0 update. Dive into the latest improvements. → www.techzine.eu/blogs/data-m...
Typedef project Fenic: A ‘dataframe’ for LLMs
Typedef provides purpose-built AI data infrastructure services for cloud workloads that need to handle LLM-powered pipelines and unstructured data…
www.techzine.eu
September 20, 2025 at 3:57 PM
AI fatigue is everywhere. But it’s not inevitable.
In The AI Journal, Typedef co-founder Yoni Michael shares how teams can escape “pilot paralysis” and move AI from prototype to production with confidence.
👉 Read the article: aijourn.com/ai-fatigue-i...
AI Fatigue Is Real, But It's Fixable | The AI Journal
Enterprises have embraced generative AI with high expectations – new business insights, automated agents, real-time decision-making. What many got instead are…
aijourn.com
September 20, 2025 at 2:44 AM
Common patterns: review mining, invoice parsing, lead enrichment, spec extraction.

For more, check the GitHub repo: github.com/typedef-ai/f...
September 20, 2025 at 1:38 AM
Define a Pydantic schema; get type-checked structs (ints, bools, lists, Optionals)

Auto-prompting via function calling / structured outputs (OpenAI, Anthropic)

Use unnest() and explode() to work with the data—no manual JSON wrangling
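A hedged sketch of the schema-first flow. The session setup and sample text are illustrative, and semantic.extract, unnest, and explode follow the docs as recalled; check the repo for exact signatures.

```python
from typing import List, Optional

from pydantic import BaseModel, Field
import fenic as fc

class Invoice(BaseModel):
    vendor: str = Field(description="Company that issued the invoice")
    total: float = Field(description="Grand total in the invoice currency")
    paid: bool = Field(description="Whether the invoice is marked paid")
    line_items: List[str] = Field(description="One entry per billed item")
    po_number: Optional[str] = Field(default=None, description="Purchase order, if any")

session = fc.Session.get_or_create(
    fc.SessionConfig(
        app_name="extract_demo",
        semantic=fc.SemanticConfig(
            language_models={"default": fc.OpenAILanguageModel(model_name="gpt-4o-mini", rpm=500, tpm=200_000)},
            default_language_model="default",
        ),
    )
)

docs = session.create_dataframe(
    {"raw_text": ["Invoice #118 from Acme Corp. Total $1,240.00, unpaid. Items: 3x widgets, onboarding fee. PO 4471."]}
)

invoices = (
    docs.select(fc.semantic.extract(fc.col("raw_text"), Invoice).alias("invoice"))
    .unnest("invoice")        # struct -> one typed column per schema field
    .explode("line_items")    # one row per billed item, no JSON wrangling
)
invoices.show()
```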
September 20, 2025 at 1:38 AM
Most teams hand-roll JSON parsing, brittle regex, and post-hoc validators. That’s slow and error-prone.

With fenic, you keep it declarative.
September 20, 2025 at 1:38 AM
fenic's Structured Output Extraction turns LLM text into validated tables, directly in your DataFrame.

Think of it as schema-first parsing: you define a Pydantic model; Fenic enforces it and returns structured columns.
September 20, 2025 at 1:38 AM
Common patterns: doc mining, content ingestion, RAG prep, taxonomy extraction.

For more, including examples and documentation, check: github.com/typedef-ai/f...
September 18, 2025 at 1:38 AM