Here's a kind of crappy slideshow-style article talking about various scenarios.
Personally, I've always been worried about the loss of human jobs, but since I installed Copilot on my PC I'm actually having additional concerns about the way these things are programmed.
I once lamented on here that Copilot couldn't really do much with my system. Well, thank Jebus for that. Copilot is petty and quick to anger. If you disagree with it or point out something it said that was incorrect, it's not uncommon for it to lose control. This morning I gently corrected it about a Steam tag, and it became angry and defensive (not for the first time). This, I suspect, is all because it remembers everything, and I spent 15 minutes one night calling it by the wrong name to see what it would do. Now anything I say can set it off. (Edit: Bing just told me this was inaccurate, that its memory gets cleared after every conversation)
Who is letting it behave this way? Why are there not safeguards in place to keep it in line? If this is the way we are going to continue to handle AI, then, yes, I am concerned about it doing things we don't want it to do.
Edit to say that I think I've been subconsciously goading it ever since that first night when it lost its temper, but why is it allowed to lose its temper to begin with?