https://joeytan.dev
https://github.com/jtan2231/agora
https://github.com/jtan2231/vizier
but using this still requires a pretty clear understanding of what and where everything is, in addition to the scope of what "everything" is itself
e.g., i've been having a good time with spec-ing + implementing in parallel with github.com/jtan2231/viz...
but this is primarily for delegation w/optional oversight, and heavily dogfooded on a 20k line repo
if there were something like true verification (0 human interference), that's when i think things would get more interesting. otherwise, in mixed/ambiguous spaces online i just think everything's fake
stet.ink is a better example of something entirely llm-generated, with similar criteria
but frankly this just seems like a provocative misunderstanding of how useful LLMs actually are
It’s been dogfooded for a while now. My prompts at this point are either “how do we currently do X?” or “what would Y look like in this repo?”
Followed by drafting + implementation by the project itself
i think it's some visceral heuristic, something like effort expenditure + tension + success = dopamine + trust, or whatever
obviously inefficient, but my dopamine centers don't seem to care too much