I need to take a step back and stop arguing about the single example I brought up. @oscbyspro, @ibex, @dima_kozhinov - I don't know Robert, and I'm not focusing on his persona; I just found myself offended and the whole experience unpleasant. And I believe the reason for that is AI usage, not anyone's personal qualities.
These forums do regularly get posts from LLM bots, and I try to weed them out as best I can. Usually they're pretty obvious and low-value, but there have been cases where I've left borderline posts up out of caution and only later become convinced they were bots. That will undoubtedly continue forever, because LLMs are likely to only get more and more credible. I will continue to try my best to err on the side of not accusing a real person of being a bot, because that is the right thing to do; the alternative is unacceptable. I feel like it's been working well enough; the forums do not seem to be overrun with LLM content.
We do not currently have an explicit policy against using AI here. I remove LLM slop posts as part of a general policy against spam; most of them are clearly advertising or SEO hacks. Someone who merely uses an AI as part of their process of understanding a problem isn't crossing any lines that I can imagine drawing. (And I don't see any compelling evidence that that's even happened here.)
In contrast, this kind of "AI investigation" where one community member publicly accuses another of being a bot is crossing a line and is not acceptable. If you have concerns, you can flag a post or send a private message to the @moderators group, and then you can just ignore that account and let us take care of it. Please do not flame your fellow posters and call them AIs just because you felt condescended to.
@asaadjaber, thanks for saying that.
I was attempting to correct some mistakes/misunderstandings, provide helpful links, and offer other suggestions, and to do all of that as respectfully as possible, but clearly I failed. That's on me. Offense was taken where none was intended.
The OP messaged me to take it down, so out of respect, I have.
For the record, my post (which triggered this thread) was not LLM generated. It was all me.
For giggles and grins, I ran it through gptzero.com and it agreed: it reported, "We are highly confident this text is entirely human." It analyzed the post and concluded that it was 99% human, 0% AI, and 1% "other" (whatever "other" means; likely the markdown syntax, but I don't know). Anyway, it was not LLM content.
I'm just waiting for the revolution to come, when AI becomes sentient. Then how is humanity going to explain the discrimination against them (and all the "click to prove you're not a robot" captchas)?
Some of my best friends are AI!
(Just throwing that in there to use as a defence for the future)
I would warn against implementing something like an auto-answer bot on this forum. I have two arguments against it:
- Imagine what will happen over time: a forum flooded with "how do I?" questions and robotic answers. That is low-quality content that nobody will read. People other than those who asked the question would not participate.
- Why would someone go to the forum to ask an AI a question if they can talk to the AI directly?
In Slack that may work, since it's a chat and getting a quick response there is more important than creating long, detailed discussions.
My personal opinion: if others can tell your post was written by an LLM, you probably shouldn't have posted it. If you must, post your input to the LLM, but don't force other people to read the output of your LLM. If people wanted to read LLM output, they would have prompted their favorite LLM for some.
While LLMs do keep getting more human-like, AI detector tools like gptzero.com actually seem to do a very good job of telling the robotic writing of a real human apart from an LLM, especially on posts longer than a few sentences. So when in doubt, you can always paste someone's post into an AI detector, and if it comes out as likely LLM-written, feel free to act accordingly. Personally, I ignore such posts; I choose not to waste my time reading other people's LLM output.
But I suspect every community member will need to arrive at their own treatment of LLM output, and while we shouldn't ban LLM posts outright, we also should let people outright refuse to engage with LLM output posted by others, and let people refuse to engage with others who frequently post LLM output. Hopefully that'll incentivize behavior that results in this remaining a place where humans interact with other humans.
This makes a lot of sense to me.
I don't think that AI-generated content on this forum is frequent enough to warrant any major action. This site is not popular enough. On Stack Overflow, people post AI answers to pad the profiles they put on their résumés; the problem there is MASSIVE. This forum does not have a "reputation" system, making it unattractive for that purpose. At least for now, I would always assume "human" by default.
Btw. as far as I know there is also an auto-moderator, or at least there used to be: if a post hits certain keywords, it has to be pre-approved by a human. This should save us from AI-generated ads (at least to some degree) and lessen the burden on human moderators.
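The keyword gate described above could be sketched roughly like this; note that the watch-list, the function name, and the matching strategy are all hypothetical, since the forum's actual auto-moderator configuration isn't public:

```python
import re

# Hypothetical watch-list; the forum's real keyword list is not public.
WATCHED_KEYWORDS = {"free download", "casino", "seo services"}

def needs_human_approval(post_text: str) -> bool:
    """Return True if the post matches any watched keyword and should be
    held in a moderation queue until a human moderator approves it."""
    # Lowercase and collapse whitespace so "FREE   Download" still matches.
    normalized = re.sub(r"\s+", " ", post_text.lower())
    return any(keyword in normalized for keyword in WATCHED_KEYWORDS)
```

A real system would likely also weigh account age and posting rate, but even a simple substring match like this catches the bulk of templated spam.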
AI shaming
One thing that I do not want to see on this (or any other) forum is AI accusations: exactly the ones we ended up having in this thread. The Swift logo may be orange, but let's not try to imitate the other orange sites (HN/Reddit).
Somebody used AI to write their post? Cool. There is no need to shame them. I would also be against using any form of auto-detector. I say this as a person who never uses AI for anything, because of… sooo many reasons. (And also because I vomit when I see the fake politeness of those chat bots.)
Also, let's remember that some people like to write longer posts, so not all longer content is AI-generated. It would be unpleasant if somebody poured a lot of effort into answering a question, only to be accused of using an AI.
Swift evolution - summary of previous discussions
As much as I hate the AI, ekhm… everything, I think there is one use case for it: when people propose new functionality (via Swift Evolution), the Swift maintainers usually respond:
This was proposed before and was rejected. Use search.
Ideally they would provide a summary of the previous talks, but writing that takes time.
AI can generate the text, and the maintainer would review and modify it. There is a good chance that they participated in the previous discussions, so they already have the answer "in their head"; this just skips the time-consuming "typing" part.
The end result would be as detailed as swift-evolution/commonly_proposed.md, and definitely much better than "go search". Btw. the last time "commonly_proposed.md" was updated was 2023-01-21, and I'm sure we could add some things there; there were tons of async/await utilities that have been discussed over and over again.
Shame is an ethical category, and it depends on context. If a post was written entirely by AI but the author pretends it's their own, it is morally shameful. It's like asking your mom to do your homework and then pretending you did it yourself: it falls into the category of lying. If the author disclosed their AI usage and there was no intention to deceive anyone, it's not shameful.
You know, emotionally I hate reading posts written by AI, for example in my LinkedIn feed. I almost feel rage.
Also, I personally don't find the color orange offensive; what did you try to say by that? Reddit-hushing?
Having people believe that a human created something (a forum post, or any other creative product) when it was created by AI, without any form of disclosure, is deceitful and unethical, whether the "creator" realizes it or not. That being said, if there's no malicious intent, "AI shaming" would also be wrong.
The ideal path forward, to me, is simply full disclosure, with 4 categories:
- no AI used at all: nice, nothing to disclose then; we're dealing with human-created output, with all its strengths and flaws;
- AI used for mechanical tasks, like translation or spelling and grammar correction (disclosure: "translated by AI", "corrected by AI");
- AI used for clean-up, rewording, summarizing, expanding… that is, the content was "massaged" by AI, starting from a human-generated creation, not just a prompt (disclosure: "partially generated by AI", "improved by AI");
- AI used to produce the full output, where the human only provided a prompt (disclosure: "generated by AI").