Discovering your AI is technically helpful but functionally useless—a special kind of product failure that only shows up when someone actually tries to use the thing
#BuildAI
wordoflore.ai/langsmith-mu...
LangSmith: multi-turn evaluation
Imagine you’re chatting with an AI, asking it to help you book a flight. It might give you the right answer to every single question you ask, but somehow, you still end up without a ticket. That’s whe...
wordoflore.ai
November 5, 2025 at 1:31 PM
[GN] OpenGPTs - An open-source implementation of OpenAI's GPTs
(by 9bow)
https://d.ptln.kr/2875
#geeknews #llm #langchain #gpts #langsmith #langserve #opengpts
With permission from xguru of GeekNews, I'm sharing AI-related news from posts that appear on GN. 😺 Overview: implements rough equivalents of the features OpenAI offers, including the sandbox, custom actions, built-in tools (web browsing, image generation, PythonREPL, etc.), analytics, chatbot draft preview, chatbot publishing, and sharing. Knowledge Files and Marketplace support is planned. LangChain + LangServe + LangSmith. Choose from the 60+ LLMs LangChain provides. Prompt debugging via LangSmith. Supports a variety of vector DBs and chat history DBs. Supports 4 agent types: GPT 3.5 Turbo, GPT 4, Azure OpenAI, Claude 2. Original article OpenGPTs repository LangChain repository LangServe repository LangSmi...
d.ptln.kr
November 18, 2023 at 6:51 AM
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.
thehackernews.com
June 17, 2025 at 6:14 PM
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.
thehackernews.com
June 17, 2025 at 5:46 PM
Deploying Langfuse in Kubernetes: an open-source alternative to LangSmith...
https://habr.com/ru/articles/877070/?utm_source=habrahabr&utm_medium=rss&utm_campaign=877070
#langfuse #kubernetes #llm-applications #monitoring #applications #postgresql #open-source #machine #learning
January 27, 2025 at 4:05 PM
<a href="https://speakerdeck.com/os1ma/langsmithwohuo-yong-sitaragnoping-jia-gai-shan-huronozheng-bei?utm_campaign=talk&utm_medium=email&utm_source=following" class="hover:underline text-blue-600 dark:text-sky-400 no-card-link" target="_blank" rel="noopener" data-link="bsky">speakerdeck.com/os1...
LangSmithを活用したRAGの評価・改善フローの整備
LangSmithを活用したRAGの評価・改善フローの整備
LangSmithを活用したRAGの評価・改善フローの整備
2024年5月22日 #mlopsコミュニティ
speakerdeck.com
May 22, 2024 at 11:50 PM
<a href="https://speakerdeck.com/os1ma/langsmithwohuo-yong-sitaragnoping-jia-gai-shan-huronozheng-bei?utm_campaign=talk&utm_medium=email&utm_source=following" class="hover:underline text-blue-600 dark:text-sky-400 no-card-link" target="_blank" rel="noopener" data-link="bsky">speakerdeck.com/os1...
LangSmithを活用したRAGの評価・改善フローの整備
LangSmithを活用したRAGの評価・改善フローの整備
Beyond grappling with the powers of the agent platform, this was an opportunity to:
- dive deeper into #a11y on the web,
- and dig into platforms like @langchain.bsky.social #Langsmith for debugging
- @rive.app for some (super kawaii) embellishments!
I'd love to hear what excites you :) ✌️
AskPaige - What if screen readers were AI powered?
YouTube video by Jonah Goldsaito
youtu.be
January 30, 2025 at 2:57 PM
30-minute GenAI experiment? 💡⚙️
pip install langgraph langchain openai then wire an LLM node to a calc node.
LangSmith will trace every hop like a live subway cam. Spot logic gaps before users do.
Read more here:
open.substack.com/pub/simplyal...
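For context, here is a minimal sketch of the kind of two-node graph the post describes; it is not taken from the linked article, and the node names, prompt, and model choice are illustrative assumptions. An LLM node turns a question into an arithmetic expression, a calc node evaluates it, and LangSmith picks up each hop once tracing is enabled through its usual environment variables.

# Minimal two-node LangGraph sketch (illustrative; names and prompt are assumptions).
# LangSmith tracing is assumed to be enabled via environment variables,
# e.g. LANGSMITH_TRACING=true and LANGSMITH_API_KEY=...
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    question: str
    expression: str
    answer: float


llm = ChatOpenAI(model="gpt-4o-mini")


def llm_node(state: State) -> dict:
    # The LLM node rewrites the question as a bare arithmetic expression.
    msg = llm.invoke(
        f"Rewrite as a plain arithmetic expression, output nothing else: {state['question']}"
    )
    return {"expression": msg.content.strip()}


def calc_node(state: State) -> dict:
    # The calc node evaluates the expression deterministically.
    # (Toy example only; never eval untrusted input in real code.)
    return {"answer": float(eval(state["expression"]))}


graph = StateGraph(State)
graph.add_node("llm", llm_node)
graph.add_node("calc", calc_node)
graph.add_edge(START, "llm")
graph.add_edge("llm", "calc")
graph.add_edge("calc", END)
app = graph.compile()

print(app.invoke({"question": "What is 12% of 250?"}))

With tracing on, both node runs and the underlying model call appear as spans of a single trace in LangSmith, which is what makes gaps between the two nodes easy to spot.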
May 22, 2025 at 7:36 PM
"Building a Multi-Agent AI System with LangGraph and LangSmith"
A step-by-step guide to creating smarter AI with sub-agents.
by Fareed Khan
levelup.gitconnected.com/building-a-m...
levelup.gitconnected.com
June 8, 2025 at 12:17 PM
"Building a Multi-Agent AI System with LangGraph and LangSmith"
A step-by-step guide to creating smarter AI with sub-agents.
by Fareed Khan
levelup.gitconnected.com/building-a-m...
A step-by-step guide to creating smarter AI with sub-agents.
by Fareed Khan
levelup.gitconnected.com/building-a-m...
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.
thehackernews.com
June 17, 2025 at 6:05 PM
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.
thehackernews.com
June 17, 2025 at 7:15 PM
[Tech Selection / OSS Edition] Using LangSmith to streamline evaluation and experimentation in LLM product development | Gaudiy Tech Blog
Hello. I'm seya (@sekikazu01) from Gaudiy, a Web3 startup moving the times forward together with fans. Gaudiy has released langsmith-evaluation-helper, a library that makes the evaluation experience with LangSmith nicer. github.com Roughly, you write a config like the following plus, as shown in more detail later, a function that runs the LLM or a prompt template and a function that runs the evaluation: description: Testing evaluations prompt: entry_function: toxic_e…
techblog.gaudiy.com
July 23, 2024 at 12:32 AM
Getting started with online evaluation of LLM applications using LangSmith | PharmaX Tech Blog feed
zenn.dev
August 20, 2024 at 2:32 AM
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.
thehackernews.com
June 17, 2025 at 6:56 PM
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.
thehackernews.com
June 17, 2025 at 6:30 PM
<a href="https://techblog.gaudiy.com/entry/2024/07/23/080117" class="hover:underline text-blue-600 dark:text-sky-400 no-card-link" target="_blank" rel="noopener" data-link="bsky">techblog.gaudiy.com...
【技術選定/OSS編】LLMプロダクト開発にLangSmithを使って評価と実験を効率化した話 - Gaudiy Tech Blog
【技術選定/OSS編】LLMプロダクト開発にLangSmithを使って評価と実験を効率化した話 - Gaudiy Tech Blog
【技術選定/OSS編】LLMプロダクト開発にLangSmithを使って評価と実験を効率化した話 - Gaudiy Tech Blog
こんにちは。ファンと共に時代を進める、Web3スタートアップ Gaudiy の seya (@sekikazu01)と申します。 この度 Gaudiy では LangSmith を使った評価の体験をいい感じにするライブラリ、langsmith-evaluation-helper を公開しました。 github.com 大まかな機能としては次のように config と、詳細は後で載せますが、LLMを実行する関数 or プロンプトテンプレートと評価を実行する関数を書いて description: Testing evaluations prompt: entry_function: toxic_e…
techblog.gaudiy.com
July 26, 2024 at 6:37 AM
<a href="https://techblog.gaudiy.com/entry/2024/07/23/080117" class="hover:underline text-blue-600 dark:text-sky-400 no-card-link" target="_blank" rel="noopener" data-link="bsky">techblog.gaudiy.com...
【技術選定/OSS編】LLMプロダクト開発にLangSmithを使って評価と実験を効率化した話 - Gaudiy Tech Blog
【技術選定/OSS編】LLMプロダクト開発にLangSmithを使って評価と実験を効率化した話 - Gaudiy Tech Blog
My wins today for MailWizard
- Switched from LangSmith to Langfuse for LLM monitoring
- Setup was surprisingly smooth
- Love the built-in session tracking
- The scoring system helps us see how useful our email drafts are (when someone copies the draft, we count that as a win!)
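A rough sketch of how that copy-as-a-win signal could be recorded as a Langfuse score, assuming the v2-style Python SDK; the trace-id plumbing, the "draft_copied" score name, and the handler itself are illustrative, not MailWizard's actual code:

# Hypothetical sketch: attach a binary "win" score to the trace that produced the draft
# whenever the user copies it (assumes Langfuse v2-style Python SDK).
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST


def on_draft_copied(trace_id: str) -> None:
    # Record the copy event as a score on the generation trace.
    langfuse.score(trace_id=trace_id, name="draft_copied", value=1)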
November 25, 2024 at 8:35 PM
There was Inspect, by the AI Safety Institute github.com/UKGovernment... and there was also LangSmith docs.smith.langchain.com
GitHub - UKGovernmentBEIS/inspect_ai: Inspect: A framework for large language model evaluations
Inspect: A framework for large language model evaluations - UKGovernmentBEIS/inspect_ai
github.com
November 18, 2024 at 2:03 PM
<a href="https://zenn.dev/atamaplus/articles/917e3acec1c25d" class="hover:underline text-blue-600 dark:text-sky-400 no-card-link" target="_blank" rel="noopener" data-link="bsky">zenn.dev/atamaplus/...
langchain/openevalsでLLM-as-a-judgeの基本を理解
- LLM-as-a-judge を openevals で試せる
- 評価結果は True/False ではなく細分化も可能
- LangSmith と統合し評価結果を可視化できる
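As a rough illustration of the openevals pattern the post walks through, here is a sketch assuming the library's documented create_llm_as_judge helper and prebuilt CORRECTNESS_PROMPT; the model string and the example texts are made up:

# Sketch of an LLM-as-a-judge check with openevals (illustrative values).
# With LangSmith tracing environment variables set, judge runs can be inspected in LangSmith.
from openevals.llm import create_llm_as_judge
from openevals.prompts import CORRECTNESS_PROMPT

correctness_evaluator = create_llm_as_judge(
    prompt=CORRECTNESS_PROMPT,
    model="openai:o3-mini",       # assumed model identifier
    feedback_key="correctness",
    # continuous=True,            # per the docs, switches from True/False to float scores
)

result = correctness_evaluator(
    inputs="What is 12% of 250?",
    outputs="12% of 250 is 30.",
    reference_outputs="30",
)
print(result)  # e.g. {"key": "correctness", "score": True, "comment": "..."}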
zenn.dev
March 23, 2025 at 11:34 AM
<a href="https://zenn.dev/atamaplus/articles/917e3acec1c25d" class="hover:underline text-blue-600 dark:text-sky-400 no-card-link" target="_blank" rel="noopener" data-link="bsky">zenn.dev/atamaplus/...
langchain/openevalsでLLM-as-a-judgeの基本を理解
- LLM-as-a-judge を openevals で試せる
- 評価結果は True/False ではなく細分化も可能
- LangSmith と統合し評価結果を可視化できる
langchain/openevalsでLLM-as-a-judgeの基本を理解
- LLM-as-a-judge を openevals で試せる
- 評価結果は True/False ではなく細分化も可能
- LangSmith と統合し評価結果を可視化できる
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.
thehackernews.com
June 17, 2025 at 6:22 PM
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.
thehackernews.com
June 17, 2025 at 6:35 PM
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.
thehackernews.com
June 17, 2025 at 6:07 PM
LangChain vs LangSmith: A Developer-Focused Comparison
Discover the key differences between LangChain and LangSmith in this in-depth guide. Compare features, integration options, and benefits. In LLM application development, LangChain and LangSmith have...
blog.promptlayer.com
May 15, 2025 at 7:19 PM
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.
thehackernews.com
June 17, 2025 at 7:17 PM