Aster
@therebelrobot.bsky.social
🏳️‍⚧️🏳️‍🌈🇵🇸
she/her
gay stuff
nerdy stuff
Oh! Sorry, one more question and I'll leave ya be: for the weighting in the problem boards, did you envision those weights being socially determined by existing users, based possibly on topic/tag relevance? Or seeded by academic or professional certifications? Or both?
April 15, 2025 at 5:23 AM
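A minimal TypeScript sketch of the hybrid read of that question; every name here (CredentialSeed, tagReputation, voteWeight) is hypothetical, not anything confirmed from the Dandelion design:

```typescript
// Sketch of a hybrid weight: socially earned reputation per topic/tag,
// optionally seeded by verified credentials. All names are illustrative,
// not part of any existing Dandelion implementation.

interface CredentialSeed {
  issuer: string;      // e.g. an academic or professional body
  tags: string[];      // topics the credential is relevant to
  seedWeight: number;  // starting weight granted on those tags
}

interface UserStanding {
  userId: string;
  tagReputation: Map<string, number>; // earned socially, per topic/tag
  credentials: CredentialSeed[];
}

// Weight of a user's vote on a post tagged with `tags`:
// social reputation on matching tags, plus any credential seeds.
function voteWeight(user: UserStanding, tags: string[]): number {
  let weight = 1; // everyone starts with a baseline vote
  for (const tag of tags) {
    weight += user.tagReputation.get(tag) ?? 0;
    for (const cred of user.credentials) {
      if (cred.tags.includes(tag)) weight += cred.seedWeight;
    }
  }
  return weight;
}
```

Under this sketch the social and credential components are simply additive, but they could just as easily be multiplied, capped, or decayed over time; that is exactly the design question the post above is asking.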
(Side note: by "inaccessible", I mean they aren't conscious and can't interact with the problem boards as agents the way humans can; not that they aren't physically accessible via sensors or human interaction.)
April 15, 2025 at 5:16 AM
The reason I'm asking these right now is that I firmly believe we already have accessible tech that could support building an application framework that follows your model for the Dandelion Networks, even if at lower fidelity, and I would love to get a prototype drafted, deployed, and in use on a micro scale.
April 15, 2025 at 5:12 AM
Lastly: if I'm reading this right, the algorithms aren't just stand-in advocates for inaccessible resources (e.g. an agent for a river), but for entire values (e.g. an agent for the value that preserving the river is worthwhile). Is that an accurate read? Is there more nuance I'm possibly missing?
April 15, 2025 at 5:10 AM
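A toy TypeScript sketch of the distinction that read draws, with all names (AdvocateAgent, riverAgent, preservationAgent) invented purely for illustration:

```typescript
// Sketch distinguishing the two readings above: an agent standing in for
// a resource vs. an agent standing in for a value about that resource.

interface Proposal {
  id: string;
  description: string;
  tags: string[];
}

// An advocate agent scores proposals and posts a weighted position.
interface AdvocateAgent {
  represents: string;            // what it stands in for
  evaluate(p: Proposal): number; // -1 (oppose) .. 1 (support)
}

// Resource advocate: speaks for the river's measurable, sensor-backed state.
const riverAgent: AdvocateAgent = {
  represents: "the river (sensor-backed state)",
  evaluate: (p) => (p.tags.includes("reduces-runoff") ? 1 : 0),
};

// Value advocate: speaks for the position that preserving the river is
// worthwhile, even on proposals with no direct sensor signal at all.
const preservationAgent: AdvocateAgent = {
  represents: "the value: preserving the river is worthwhile",
  evaluate: (p) => (p.tags.includes("riparian-impact") ? -0.8 : 0),
};
```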
Another question about the problem boards of the Dandelion Network: besides the algorithms, were they based on a specific existing application? They seem Reddit-like with the threading and voting, but there's a weighting system that seems a bit different, and I'd like to try to find an existing analog.
April 15, 2025 at 5:06 AM
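For a concrete analog, a minimal TypeScript sketch of a Reddit-like threaded board where votes carry the weights from the sketch further up, rather than each counting as 1; all structures are illustrative:

```typescript
// Sketch of a threaded problem board with weighted voting.

interface WeightedVote {
  userId: string;
  direction: 1 | -1;
  weight: number; // e.g. from voteWeight(...) in the sketch above
}

interface BoardPost {
  id: string;
  author: string;
  body: string;
  tags: string[];
  votes: WeightedVote[];
  replies: BoardPost[]; // threading, as on Reddit
}

// A post's score is the weighted sum of its votes, so standing on the
// relevant tags matters more than raw headcount.
function score(post: BoardPost): number {
  return post.votes.reduce((s, v) => s + v.direction * v.weight, 0);
}
```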
So, first question: do you have any follow-up resources I could take a look at for modeling that "values-based crowdwork session" with the Ringers? That seemed like a specific enough concept that there's bound to be more for me to dig into there.
April 15, 2025 at 5:02 AM
These are hecka rad! Do you post the designs/code anywhere? I'd love to try my hand at building one.
April 7, 2025 at 12:02 PM
>> less individually risky and more collectively informed.
March 28, 2025 at 4:15 PM
> pros/cons, and none of them could be considered purely ethical, so a lot of individual judgement and collective trust is needed to navigate it. But that's one of the really cool ideas behind the Dandelion Network in the first place: democratizing those decisions can make them >>
March 28, 2025 at 4:14 PM
"up to us" in this context, is whatever sphere's of influence we have access to. That could mean volunteer labor, if we have the access/energy/ability/privilege; B-corps are an exciting way of leveraging the coporate paradigms in more humane ways, as are non-profits and NGOs. Each have their >
March 28, 2025 at 4:12 PM
> curation, generation, etc., that glue is just bluster and theory. We can, and should, be valuing and compensating contributors to LLM projects equally: data generation, data processing, infrastructure, ideation, because each is an essential block in the project's success.
March 28, 2025 at 4:10 PM
Oh absolutely. That's the unspoken part of all of this: the devaluation of the people behind the data inputs as valid partners in dataset management. Programmers, admins, etc. are the glue that can help connect the dots, but without those original dots, without the original data collection, management, cleanup, >
March 28, 2025 at 4:04 PM
>>>>> funded their research/data collection/etc. The tools are just now becoming more readily available to people who don't have systemic privilege or venture capital behind them to leverage. It's up to us to use those tools, and build more, ethically.
March 28, 2025 at 12:05 PM
>>>> Of note here, though: LLMs existed long before ChatGPT made them cheap/popular; they just had very niche uses: environmental datasets, training models for autonomous driving, language translation models, etc. But they were private, proprietary models only available to corporations who >>>>>
March 28, 2025 at 12:03 PM
>>> that drum for decades), and demanding data transparency from the models we choose to engage with (which honestly is where education and accessibility of the topic are critical; otherwise you get arguments calling for complete abandonment of the system rather than a nuanced approach). >>>>
March 28, 2025 at 12:00 PM
>> data collection paradigm, or at least it aims to. Some things we can push for to help foster that ideal: platforms that allow creators to explicitly opt in to specific training models (I believe some of these already exist), licensing standardization (open source software has been beating >>>
March 28, 2025 at 11:55 AM
> but that doesn't mean they *can't* and *don't* exist. Ethical data training requires collecting explicit consent, clear attribution and revocability, and transparency of usage, which necessarily means those datasets are smaller, less diverse, and more costly to collect. Wikipedia, for example, follows this >>
March 28, 2025 at 11:51 AM
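Those three requirements could be captured as a per-contribution record, sketched below in TypeScript; the schema is purely illustrative, not any real platform's:

```typescript
// Sketch of explicit consent, clear attribution, and revocability
// as a per-contribution record. Field names are invented.

interface ConsentGrant {
  contributorId: string;   // clear attribution
  contributionId: string;
  allowedModels: string[]; // explicit opt-in, per training model
  grantedAt: Date;
  revokedAt?: Date;        // revocability: set when consent is withdrawn
}

// Transparency of usage: a dataset build only includes contributions
// whose consent covers that model and has not been revoked, and it
// logs every check for auditing.
function usableFor(grant: ConsentGrant, model: string): boolean {
  const consented = grant.allowedModels.includes(model);
  const active = grant.revokedAt === undefined;
  console.log(`audit: ${grant.contributionId} -> ${model}: ${consented && active}`);
  return consented && active;
}
```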
Oh no! Large Language Modelling is just a type of ML structuring and doesn't necessitate a specific source for its dataset. It's true there's a higher administrative cost to ethically sourced sets, which is why the more popular (read: cheaper) models are coming under fire, >
March 28, 2025 at 11:48 AM