I’m seeing some users who are starting to use AI to generate low-quality answers to questions. The answers are either incorrect or low-value (lacking new content, or demonstrating poor programming practices).
Users found to be using AI like this should be permanently banned from all community resources (Compass, this Developer Community, etc.). Hopefully that will put them out of a job (hard to be a SailPoint developer without access to the community or Compass) so they don’t keep doing harm to their employer and the rest of the community. With the consequences being so severe, it will hopefully actually be effective at stopping people from posting junk like this.
We understand your frustration with the increase in AI-generated content. Ensuring the accuracy and value of shared knowledge in the community is a top priority for us. We agree with you that misinformation or low-quality AI answers can make it harder for the community, especially when we are talking about an intricate subject like SailPoint development.
Both the Developer Relations and Compass teams are actively aware of this issue and we are taking it very seriously. We’re continuing to refine our approach to moderation and community standards to better address these kinds of posts. While permanent bans are sometimes necessary, our focus is on fair, thoughtful actions that protect the community while giving contributors a chance to improve when possible.
Thanks again for your vigilance, and please continue to flag any content that seems off! Community input is critical in helping us maintain a high standard.
Flag the posts you’re talking about and the devrel team will address them. I haven’t seen any bans that I know of, but some people may have had short vacations and/or been stripped of their status.
They can’t realistically watch everything so they rely on us to report that when we see it.
@mcheek I’ve flagged several posts now. Still waiting on a response for the latest ones, but the first time I flagged something, it got a response pretty quickly, so I’m guessing wait times are just variable depending on the moderation team workload, as you say.
@darylclaude_medina I somewhat understand the desire to be fair and thoughtful. I think it’s important to recognize, however, that the people who do this aren’t just harming the community. They’re also harming their employers and those who rely on their “expertise”. They’re also exhibiting dishonesty that brings into question their likelihood to ever improve.
Novice developers may be led astray, resulting in more production defects. In some settings, such as healthcare, defects can literally and without exaggeration cost lives.
More experienced developers have to sift through the noise to find the signal. Incorrect AI-generated answers hurt the usefulness of the community as a whole by cluttering it with what is effectively spam. This is likely to affect search and make it harder to find relevant answers to questions.
Again, they aren’t just harming the community. They’re also harming their employer. If they’re using unchecked AI answers here, they’re probably using it in their day jobs.
I do think there should be an appeal process. If the users can demonstrate that they understand the problem and simply made a mistake or missed a mistake by the AI but otherwise did their due diligence, great! If they can’t, then given the level of harm they can do, I don’t think they deserve a chance to do further harm.
Lastly, I feel the severity of the punishment is necessary to act as a deterrent. I can’t imagine much else being enough of a consequence that it outweighs the ease of doing this.
Hello everyone, thank you so much for your thoughts on this topic. We’ve definitely seen an influx of low-effort AI-generated answers here in the forum. Please know that we do review every flagged post, and as @mcheek said, we can’t watch everything that comes in, so please keep flagging posts you see! We have also taken action and removed a few people from the Ambassador program for negligent AI use.
The community teams are currently talking about how to improve our AI policy going forward and hope to have it rolled out in the new year. I like the idea of an appeals process, @kjperkin, thank you for suggesting it!
Removing people from the Ambassador program just doesn’t feel strong enough to me. My biggest concern does not feel like it’s being addressed: SailPoint IdentityIQ is used in environments where mistakes can have serious consequences. Consider that in a healthcare setting, delays or issues with getting access to systems can literally cost lives. Lazy AI use puts people at risk of real financial, reputational, or even physical harm. It’s inexcusable behavior, and the consequences should reflect that.
Another thing I wonder about is how other users will be warned that a user has abused AI in the past. At a minimum, I think that users who have been found to be abusing AI and either have not appealed or have failed their appeal should have some marker on their replies indicating that they are believed to have used AI for answers in the past and that their answers are thus suspect. The marker should be permanent for their past posts going back to when Gen-AI became common and last for at least a year on new posts.
I definitely see your point about the potential real-world consequences of inappropriate AI use. I will take this back to the team, and we will weigh it alongside the other concerns raised when drafting our new AI policy for this forum.
Thanks again for sharing your thoughts, I truly appreciate you taking time to do so.
Danielle