yes. this is a probabilistic technology. it will guess wrong. in fact, everything useful an LLM does is driven by the same factors that lead it to hallucinate.
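to make "probabilistic" concrete, here is a minimal sketch in python with a toy vocabulary and made-up logits (none of these numbers come from a real model): sampling from a next-token distribution is the same mechanism whether the draw happens to be right or wrong.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy vocabulary and made-up logits for the next token after
# "the capital of australia is" -- purely illustrative numbers
vocab = ["canberra", "sydney", "melbourne", "paris"]
logits = np.array([2.1, 1.8, 0.9, -2.0])

def softmax(x, temperature=1.0):
    z = x / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)
print(dict(zip(vocab, probs.round(3))))

# generation is just repeated draws from distributions like this:
# most draws say "canberra", but a nontrivial fraction say "sydney".
# the mechanism that produces fluent, useful text is the same one
# that sometimes commits to a plausible wrong guess.
samples = rng.choice(vocab, size=1000, p=probs)
print({w: int((samples == w).sum()) for w in vocab})
```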
if i said "no," i would be wrong because "autocomplete" is the objective function.
if i said "yes," i would be deceptive because even autocomplete has a representation space which encodes properties of language which you would not expect.
if i said "no," i would be wrong because "autocomplete" is the objective function.
if i said "yes," i would be deceptive because even autocomplete has a representation space which encodes properties of language which you would not expect.