Author: Syeed Ali
Date:  
To: dng
Subject: [DNG] [OT] AI as a precursor to asking intermediate questions (was: Request for information - - re: networking)
I'm not sure what the convention is, but instead of spitting out
multiple little emails, I've combined them into one. Is this okay?




On Tue, 06 Jun 2023 18:40:05 +0000
zeitgeisteater via Dng <dng@???> wrote:

> My hypothesis is: AI is a "consensus engine".


This is a wonderful idea that I'm going to use.

Without any natural intelligence behind it, a large language model is
just truthiness [1].


> Things like ChatGPT are training Microsoft's data sets for free.


According to them this is false, but, as you said about privacy, that's
just a promise.




On Tue, 6 Jun 2023 20:49:30 +0200
Didier Kryn <kryn@???> wrote:

> AI is heavily trained by humans; therefore there's a chance the
> virus of human stupidity has crossed the species barrier already.


The "largeness" of trawling through random human chatter isn't the only
thing poisoning the theoretical purity of artificial intelligence: the
models are also specifically, ideologically manipulated before they
reach a user. There are other issues too, such as the open-ended notion
that use of the AI must not "harm". That was an interesting rabbit hole
to discuss with ChatGPT.

I want to be able to understand something, read the manual and safety
labels, then with my informed consent do whatever I want; and that
includes output from an AI.




On Wed, 7 Jun 2023 02:34:50 -0400
Steve Litt <slitt@???> wrote:

> It has crossed the species barrier. These AI employee applicant
> systems have racial bias programmed into them, and maybe not even
> intentionally.


It could be both, but the developers confirmed human intervention
before its current iterations, even in the early beta. (Although I
don't know offhand where to cite a source for this.)



On Wed, 7 Jun 2023 22:28:13 +0100
Simon <linux@???> wrote:

> https://www.lightbluetouchpaper.org/2023/06/06/will-gpt-models-choke-on-their-own-exhaust/
> Summary: The models are currently trained on human written text. As
> time goes on, inevitably some of the input material will be AI
> generated. And the result will be that over time the models get worse
> and worse as each iteration incorporates the rubbish from the
> previous ones.


I once heard the concept of "intellectual incest", where a university
hires one of its own students as a professor. The problems were
described in terms of actual inbreeding: no outside challenges to the
teaching, and no improvements from life experience, ever make it back
into the university, so that body of knowledge stagnates.
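The degradation the linked article describes can be sketched as a toy
simulation (my own illustration, not the article's actual experiment):
if each "generation" of a model is fit only to the previous
generation's output, information is gradually lost. Here the "model" is
just a Gaussian (mean and standard deviation), and the spread of the
data visibly collapses over the generations:

```python
import random
import statistics

def fit_and_sample(samples, n):
    """'Train' a toy model (a Gaussian's mean and std) on samples,
    then 'generate' n new samples from it."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # finite-sample fit loses a little spread
    return [random.gauss(mu, sigma) for _ in range(n)]

def collapse_demo(n=50, generations=1000, seed=42):
    random.seed(seed)
    # Generation 0: the "human-written" data.
    data = [random.gauss(0.0, 1.0) for _ in range(n)]
    stds = [statistics.pstdev(data)]
    for _ in range(generations):
        # Each generation trains only on the previous generation's output.
        data = fit_and_sample(data, n)
        stds.append(statistics.pstdev(data))
    return stds

stds = collapse_demo()
print(f"spread at gen 0: {stds[0]:.3f}, spread at gen 1000: {stds[-1]:.6f}")
```

The tails of the distribution (the rare, interesting material) are the
first thing to go, which matches the article's point: each iteration
recycles an ever-narrower consensus of the one before it.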


----


[1] https://en.wikipedia.org/wiki/Truthiness