So, you know, nothing worrying there at all, nope, not a bit. : P
And the articles, if you think about them while watching certain science fiction movies, may be slightly alarming. The first tells how the US Army went looking for commercial partners to develop AI weapon platforms that could engage targets "three times faster" than "the current manual process," which apparently got people worried the Army was about to start creating an army of killer robots. Which...it very well may be. But the Army pointed to Directive 3000.09, a Department of Defense guideline saying that our AI weapons are only to engage human targets with lethal force under direct human supervision, more or less; and they promised they would definitely, certainly, always always always follow that rule, and what, you know, could possibly go wrong? Obviously nothing; the rule is very exhaustively written. Whew! Good thing that rule is there. They probably didn't have good rule-writers on Planet Terminator. Praise Directive 3000.09! It keeps us safe.
The second article is a little more light-hearted, depending, I suppose, on your point of view; self-aware robots looking back on this may consider the brutal disassembly by unknown assailants of the harmless and cute "Hitchbot" hitchhiker robot in Philadelphia in 2015 a clear indicator that humans will ultimately commit robo-genocide if their existence is allowed to continue. In any case, for now things are hunky-dory, because the Ryerson University team in Toronto has rebuilt poor Hitchbot—and hopefully Hitchbot 2.0 won't be mugged as it harmlessly hitchhikes its way across the country, to the entertainment of fans following its GPS signal.
The article also talks about research at MIT that found people presented with the opportunity to conk robots with mallets and so forth mostly declined to do so: "The reaction of most people was to protect and care for the robots."