👟Jim “🈶️” Liu👟 @ founder mode
itsj.im
@itsj.im
#BlackLivesMatter // Beta Tester🙋🏻‍♂️; @CascadiaJS startup fair @Developer_DAO Dev #3915; @DWebSeattle; He/him 🫔DM me about your startup
@enosarris.bsky.social will Riley Greene need to go through Cody Bellinger transformation?
October 11, 2025 at 6:52 PM
A fine summer night for some minor league baseball …
August 31, 2025 at 2:12 AM
How can Chiefs keep getting away with this?
July 19, 2025 at 3:34 PM
Look at who is on tv while I am just having brunch outside of Wrigley!?

@enosarris.bsky.social
June 20, 2025 at 3:55 PM
She is a 10, but she is always outside.
May 31, 2025 at 1:24 AM
April 8, 2025 at 2:39 PM
Fun night at Climate Pledge Arena
March 2, 2025 at 6:40 AM
Happy Lunar New Year!
January 29, 2025 at 3:03 PM
Parents against winter sickness, Unite!
January 12, 2025 at 3:18 PM
I wonder if this was the source of inspiration for @perplexityai.bsky.social's name.
January 1, 2025 at 4:50 PM
Before you go…Puka or Chase? (Assuming salary fits)

Is ChatGPT right?
December 27, 2024 at 7:49 PM
How can Chiefs keep getting away with…this?
December 25, 2024 at 9:30 PM
8/
Key takeaway: Size isn’t everything.

Alignment > Scaling.

By fine-tuning with human feedback, InstructGPT shows we can get better, safer AI without endlessly chasing bigger models.

Link to the paper 👉 https://buff.ly/3Z2e0v3
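
The "fine-tuning with human feedback" step starts with a reward model trained on pairs of responses that humans have ranked. A toy sketch of the pairwise ranking loss behind that idea (the function name and numbers here are purely illustrative, not from the paper's code):

```python
import math

def reward_model_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise ranking loss: -log(sigmoid(r_chosen - r_rejected)).
    The loss shrinks when the reward model scores the human-preferred
    response above the rejected one, and grows when it ranks them
    the wrong way around."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Correct ranking -> small loss; inverted ranking -> large loss.
print(reward_model_loss(2.0, 0.5))
print(reward_model_loss(0.5, 2.0))
```

Minimizing this loss over many human-ranked pairs gives a reward signal that the policy model is then fine-tuned against.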
November 28, 2024 at 11:04 AM
1/
Imagine if AI could actually do what you asked, instead of hallucinating random facts or being toxic.

That’s the idea behind InstructGPT—OpenAI’s game-changing 2022 paper. It’s not just smarter; it listens. 🧵👇
November 28, 2024 at 11:04 AM
1/
In 2018, OpenAI released a bombshell paper: "Improving Language Understanding by Generative Pre-Training".

It didn’t just change AI—it gave us the roadmap for GPT models we see today. 💡✨

What’s it about? Let’s break it down in 10 tweets that are smarter than a fine-tuned Transformer. 🧵👇
November 27, 2024 at 11:04 AM
Travel is tough during the holidays. If you can take advantage of Clear, I'd be happy to share guest passes. 3 months should cover a friend through 2025.

Hit me up via DM or reply!
November 26, 2024 at 1:03 AM
Want to geek out more about LLM training costs, Transformers, or the future of AI? Let me know!

Meanwhile, think of Transformers as the Netflix of AI: efficient, addictive, and totally changing the landscape. 🍿🤖
November 25, 2024 at 4:08 PM
Look at the chart 📊:

- 2017 (Transformers): ~$800.
- 2019 (BERT): ~$1,344.
- 2020 (GPT-3): ~$153,600.
- 2023 (GPT-4): ~$1.5M+.

Transformers scaled up but kept costs **lower per token** thanks to parallelism & GPU/TPU advancements.
November 25, 2024 at 4:08 PM
TL;DR:

Reed Hastings wants Netflix to "win" your attention. 🎥🛌

In AI, attention means helping models "win" by focusing on the most important data. 🧠✨

"Attention is All You Need" didn’t just reshape AI—it reshaped how we think about focus.
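
For anyone who wants the "focus" idea in code form: the paper's scaled dot-product attention is softmax(QKᵀ/√d_k)V. A minimal sketch in plain Python (tiny hand-made matrices, just to make the mechanics visible):

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)    # "focus": each row sums to 1
        # weighted mix of the values
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value pairs:
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

The query matches the first key more strongly, so the output leans toward the first value row; that weighting is the "attention" the paper's title refers to.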
November 23, 2024 at 4:43 PM
Not all GenAI = LLM! 🤯

LLMs: Generate text (ChatGPT 📝).
Image GenAI: Create images (MidJourney 🎨).

MidJourney is GenAI but NOT an LLM.
LLMs are a subset of GenAI, and GenAI is a subset of DL. It’s a hierarchy!

(Figure from Sebastian Raschka's Build a Large Language Model (From Scratch).)
November 22, 2024 at 8:25 PM
@asmartbear.com good to have you here 👋
November 21, 2024 at 8:17 PM
(While PyTorch is not a prerequisite, the book does include Appendix A to help you learn it. If you have no idea what PyTorch is, work through Appendix A first to get an understanding.)
November 18, 2024 at 12:32 AM
The three stages of coding a large language model (LLM) are:

- implementing the LLM architecture and the data preparation process

- pre-training the LLM to create a foundation model

- fine-tuning the foundation model into a personal assistant or text classifier
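
A taste of the first stage's data preparation: turning raw text into token IDs and back. This is a character-level stand-in I wrote for illustration; the book itself builds toward byte-pair encoding (via tiktoken), so treat the class below as a toy, not the book's code:

```python
class CharTokenizer:
    """Toy character-level tokenizer: maps each character seen in the
    training text to an integer ID and back. Real LLM pipelines use
    subword schemes like byte-pair encoding instead."""

    def __init__(self, text: str):
        chars = sorted(set(text))
        self.str_to_id = {ch: i for i, ch in enumerate(chars)}
        self.id_to_str = {i: ch for i, ch in enumerate(chars)}

    def encode(self, text: str) -> list[int]:
        return [self.str_to_id[ch] for ch in text]

    def decode(self, ids: list[int]) -> str:
        return "".join(self.id_to_str[i] for i in ids)

tok = CharTokenizer("hello world")
ids = tok.encode("hello")
print(ids, "->", tok.decode(ids))
```

These token IDs are what actually feeds the architecture in stage one and the pre-training loop in stage two.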
November 17, 2024 at 7:12 PM
Study note thread 🧵 for “Build A Large Language Model From Scratch” by Sebastian Raschka
November 17, 2024 at 7:12 PM