AI posts on this website

How tolerant is this forum of AI posts? Suppose a post was written, edited, or inflated with the help of AI. Should it be reported immediately, or tolerated unless the quality is very bad?

I can’t speak for everyone, but I personally would prefer to interact with humans here, not with any AI.

When someone posts on this forum, they are essentially asking me—and everyone else here—to allocate some time and energy and focus toward considering and discussing what they wrote.

Asking others to put in that effort, while being unwilling to do the same oneself and instead resorting to copying the words of a chatbot, is unreasonable.

The only possible exception I can think of is the use of AI translation tools for non-native English speakers to convert what they’ve written in their native language, to English for ease of communication. But that’s not using an AI to write posts, just to translate what’s already written.

45 Likes

"AI posts" is kind of a broad category. For me it would include everything from translated posts to posts from 'bots' that autonomously generate/reply posts.

I sometimes use GPT to improve my writing for posts I find especially important. Typically it takes out superfluous words or condenses some of the writing, making it more to the point, so to speak. I would consider that a valid use case as well.

Another use case I would love to hear your thoughts on is a forum bot that answers "how do I?" questions typically asked in the 'Using Swift' category. We use something like this in our Slack support channels, with our company Confluence as the reference material, and it actually works pretty well. The bot could leverage the wealth of knowledge that is already shared on this forum, as well as the official documentation, while clearly indicating that its replies are AI-generated.
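The shape of such a bot could be sketched very roughly as "retrieve the most relevant existing post, then reply with a clear AI disclaimer." This toy uses keyword overlap where a real bot would use an embedding index and an LLM; all names and the sample corpus here are hypothetical:

```python
# Toy sketch of a retrieval-backed forum answer bot.
# Real systems would use embeddings + an LLM; this only shows the flow:
# retrieve best-matching reference, then label the reply as AI-generated.

def score(question: str, document: str) -> int:
    """Count distinct lowercase words shared by question and document."""
    return len(set(question.lower().split()) & set(document.lower().split()))

def answer(question: str, corpus: list) -> str:
    """Pick the best-matching reference and prepend a disclaimer."""
    best = max(corpus, key=lambda doc: score(question, doc))
    return "[AI-generated reply -- please verify]\n" + best

# Hypothetical snippets standing in for indexed forum posts / docs.
corpus = [
    "To add a package dependency, edit Package.swift and list it under dependencies.",
    "Use swift build to compile and swift test to run the test suite.",
]

print(answer("How do I add a dependency to Package.swift?", corpus))
```

The disclaimer line is the part that matters most for the forum: readers immediately know the reply was machine-generated and should be verified.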

1 Like

I have mixed feelings on broader usage, but please let’s not do the bot thing. Thus far, we’ve had quite a few posts that go “I tried to do X, MyAI suggested Y, and it didn’t work” and then the thread has to untangle what MyAI said in order to reach Z. And to be clear, I don’t think such a question should leave out the middle part! It’s important to say what you’ve already tried, and it’s somewhat relevant what the source was because it skips a possible round of “why did you try Y, obviously that won’t work” (not the most polite response, of course, but not an uncommon one on the internet at large).

Now, this is no evidence of how good AI is at answering questions. If the AI is usually very good, we would never see the question posted in the first place. But it does mean we can see the cases where the AI fails its questioner.

So in this hypothetical bot scenario, having an AI make posts in response to questions here would end up feeding Cunningham’s Law. Let’s say the AI gets the right answer 29 times out of 30, and the 30th time it needs to be corrected. Normally, when person B posts an incorrect or incomplete response to person A, the ensuing discussion can correct the mistake; both A and B learn something, and either can answer the question in the future. However, if OurAI answers incorrectly and is corrected…it will keep answering incorrectly in the future, because today’s AIs are not stateful. And unlike a person, my sense is that today’s AIs are bad at knowing when they don’t know the answer to a question and should maybe wait for someone else to respond.

There’s also a more subtle point, which is that one of the ways people get invested in a community and improve their own understanding is by finding questions they can answer, and doing so. Sometimes this also ends up invoking Cunningham’s Law, because (a) if you’re still relatively new yourself, you might not have the right answer after all, and (b) even the experts get this wrong, or their information is out of date. (I’m teasing myself here, mostly, having been gone from Apple for nearly six years but still hanging around the forum.) But really, when someone posts a question, getting the question answered might be the primary goal, but it is not the only goal for the forum as a whole.

If you want to ask an AI a question, go ask an AI a question. Forums should be for talking to other people.

25 Likes

I agree with everything said here, but I should add that there is a real difference between using a translator and asking an AI to handle the cognitive load for you. Google Translate has been around for decades, and only started using ML recently (relatively speaking).

2 Likes

I guess this question per se was answered on another thread:


By the way, to what extent can we even recognize AI posts as AI-generated? :sweat_smile:

:waving_hand: Hello @kelin

:technologist: There is no strict rule on the forum that forbids using AI. I don’t think using AI is wrong — it can be very helpful — but I don’t support replies that are completely generated by AI.

People post on forums to get answers from real humans with real experience. If everything is just handed over to AI, we might as well ask AI directly, and the forum loses its value as a place for human-to-human discussion. Also, AI can be inaccurate or not rigorous enough, which may mislead others if copied without understanding.

:robot: The ideal way, in my opinion, is human first, AI as support. For example:

  • Write your own answer based on your knowledge and experience
  • Then use AI to polish the wording, translate it into English, or improve clarity

:mechanic: That’s also how I use AI: I answer first myself, then let AI translate and refine my text. In this way, AI enhances human input instead of replacing it.

:grinning_face:
Jiaxu Li
Member, Swift C++ Interop Workgroup
Swift Team

6 Likes

:green_heart:

1 Like

-1 from me, aside from spelling/grammar corrections or minor changes like that. A recent example of what I don't like to see here: obviously AI, misleading, and missing the point by two hundred miles.

4 Likes

I would suggest that if you do use any Machine Learning system, you use it more as a tool to check whether you need clarification or additional context. You should never treat content generated by Machine Learning systems[1] as content to post or use; instead, search through information from various sources, like the relevant documentation, articles, and other relevant places such as this forum. Machine Learning is a tool, not a “creator”, and should be treated as such. Translation and proofreading[2] would be as far as I would go, especially here and on other sites with technical content, which, if imprecisely specified, can undermine the utility of the information for the audience you want to address.

If you don’t understand why something works, you shouldn’t brute-force it with a fancy “magic” calculator; instead, try to determine the issue, so that you gain information which can help you improve your ability to sight-read and correct code faster. If you don’t know why something works, you can’t accurately or precisely detail the particulars of the concept, or provide any indications about what those particulars may be.

TLDR:
Machine Learning is a tool and should be used as such (i.e. translation and proofreading), not for editing and creating. Any form of Machine Learning is not a competent or acceptable replacement for human knowledge and experience, and should be used in ways that mitigate any critical information loss and hallucinations it may produce.


  1. Excluding any accessibility supports ↩︎

  2. All editing should be human generated but hints for these may come from a ML System ↩︎

What if you took content that the Large Language Model generated and then validated it using external sources, such as documentation? That would be a good use of AI, in my opinion.

The people in this forum are very kind and helpful. Strive for clarity and think before you post, but don't be overly afraid of negative reactions if you're not immediately clear or have made a mistake. People will tell you in a friendly way. So, besides a good translation tool, I just don't see the need for AI. The greater risk is that the AI will produce nonsense.

Edit: This should not have been an answer to MinerMinerMain, but to the original post. … See, I made an error. Not good, but I guess nobody will hate me for that :wink:

1 Like

I would say that you can use its information as a pointer, but you should still be able to describe any new concept in your own words before discussing it or bringing it up as a new concept in the discussion. If you can’t talk about it without the assistance of an ML model, then you shouldn’t explain the topic, and should instead find someone more capable of discussing the particular content.

2 Likes

I also think that if you are going to use AI and post content that you found there, that it would be helpful for other people to know that you did when you post on the forums, and not just to proclaim the knowledge as your own.

@kelin I wouldn’t go so far as to accuse someone of posting AI-generated content simply because they are posting long answers on the forums. In the past, Robert Ryan has answered some of my posts, and I haven’t taken any issue with them. I read the example reply you posted, and again I personally don’t see any problem with it.

1 Like

Yep, this is disrespectful, if not outright offensive. We are not kids in a kindergarten.

1 Like

Isn't this thread just a witch hunt with extra steps?

Please let's keep these discussions technical and respect each other. Calling someone a witch-hunter does not help keep the conversations healthy.

The original poster said

and I fully agree. This is a call to a more healthy conversation, and not a blame.

In defence of @robert.ryan, I wish to say a few words.

Actually, he didn’t copy your solution, he merely corrected it. Please note that he not only demultiplexed the original sequence, he even put in a termination handler. Those things were not in your code.

3 Likes

I must step back and stop all argument about the single example that I brought up. @oscbyspro, @ibex, @dima_kozhinov - I don't know Robert, and I'm not focusing on his persona. I just found myself offended and the whole experience unpleasant, and I believe the reason for that is AI usage, not someone's personal qualities.

1 Like