Mojo Hand

Everything posted by Mojo Hand

  1. It's not much of a difference when you consider that the current surveillance state isn't version 1.0, and version 1.0 of this program is not, in itself, problematic. I guess we should just let it happen and worry about what it could be used for after it's too late. I do have a problem with the broader surveillance state, but there's nothing you or I can do to dismantle it. Snowden released what he released and the public yawned. Neither party does anything of significance. All we can do is speak up every time and maybe avoid helping it grow.

     By thought analysis, I mean analyzing what people are thinking based on the data collected. The stated purpose of the program is to "identify changes in mental status that could make an individual more prone to violent behavior." If data captured from home assistants is useful in that endeavor, it's by analyzing whether the person is engaging in violent thinking based on things like web searches and who knows what else an Echo hears. Sure, excluding home assistants and other things that capture private communications would largely solve my problem with the program. We don't have control of the parameters here, though, do we?

     And I don't know why you would find it strange that I haven't raised similar concerns in connection with government health care, given that I've already said at least twice why I view this kind of information as more dangerous. No doubt, we need strong privacy protections for health data, and there are plenty of advocates fighting for that already. But I see anything designed to listen in on people's smart devices and analyze the data captured as far more ripe for abuse and societal harm, particularly when it can so easily be deployed under the justification of "stopping crime." Maybe it's just that I lack imagination, as you've said, but it's an opinion I hold honestly.
  2. I don't really care. He funded far-right social conservatives. That he may have done so simply for personal gain, without even believing in social conservatism, makes him a worse person, not better.
  3. As we all predicted, they have Trump's balls in a vise. He's up for re-election next year. Xi is president for life with total power to keep his people in line. They can handle temporary pain and deal with the next president. He can't survive an economic downturn. They've been stringing him along with this fentanyl deal, almost agreeing to a trade deal and then backing out, and he's too stupid to realize it. He actually tweets out "We had an agreement but China backed out!" and "China hasn't stopped fentanyl like they promised!" to show how mean they are, when all it does is show what a moron he is.
  4. Sometimes fabric can't be repaired. Sometimes it has to be replaced with new fabric.
  5. That's not fair. Some people voting for Republicans will be quite smart, but just pure evil.
  6. He wasn't wrong per se — he just left out "for autocratic nations like China."
  7. I don't know enough about it to draw that conclusion, but the absence of any collection of personal communications data from your summary makes it seem less problematic to me. Just as with the cholesterol example I mentioned earlier, there are significant privacy considerations involved, but nowhere near the level of a program that uses AI to analyze data captured on someone's Amazon Echo. The particular features of HARPA that bother me are that it analyzes thoughts and uses data captured by personal smart devices. Does the Sentinel program include a database of psychiatrist notes among those medical records and an AI system to sift through those notes for certain markers? That would be the kind of thing that would raise similar concerns for me.
  8. Maybe. And maybe you would hamstring our ability to stop the next 9/11 by opposing some of the programs Snowden exposed, simply because of some boogeyman using the info in a way that hasn't yet really materialized. That's literally the argument for our government surveillance programs and why nobody from either party has done much about it. There's always some compelling justification. But every society has mentally disturbed people. Every society has violent people. Every western country has video games and everything else we have. Only we have a mass shooting problem of this magnitude. I don't see why developing a robust understanding of what drives mass shooters is worth handing the government a tool to create mental profiles from data captured from personal electronics. The bigger problem is the availability of weapons that enable a single person to inflict so much damage.
  9. Yeah, maybe. I still think he never intended to win and that his repugnant personality is just, by a quirk of fate, perfect for the deplorable people who comprised the GOP by the time he came along. I think his whole presidency is a real-life The Producers, only his audience doesn't think the Hitler thing is satire; they actually love it for real.
  10. True. It's definitely possible that an advisor is shorting stocks and telling him, "You aren't gonna let Xi and Powell get away with that, are you, sir?" I still think Trump's too stupid to plan it out, though.
  11. While I certainly don't think he's above stock manipulation, I don't think he's smart enough to pull it off. I think he's really just sitting in front of a TV with his phone, totally unhinged, reacting without forethought. Maybe he coordinates some of his tweets for personal stock gain, but I don't think this is that. He really is severely mentally ill.
  12. I know what machine learning is. The problem I have is with applying it in the specific way proposed, rather than the kind of statistical analysis in my example. There's plenty of gun violence research that can be done, even using AI, that doesn't involve harvesting information from personal devices to determine a person's thoughts and intentions. Yes, I'd still have a problem with this if gun violence were carved out. My concern isn't with the policy need driving the research, but with the particular tool that would be developed, its dataset, the means of capturing that data, and the fact that it's the government doing it.

      Yes, the IC could develop something like this on its own, in theory. That doesn't justify doing it openly for them, publicly blessing the project so that research minds and tech companies will cooperate, or normalizing it for a public that might want to deploy something like this in response to a crime wave. If it comes down to it, I'd rather have an IC system that the public doesn't know about but would be horrified by than an equivalent system that the public has already tacitly accepted.

      Yes, leveraging the wealth of information captured by our personal electronic devices is all the rage. That only heightens the importance of considering the implications and unintended consequences. Your position is like someone saying in 2009 that online data-based voter targeting is no big deal because Obama's campaign and many others do it harmlessly. Yeah, then Cambridge Analytica had some ideas about that, and now we're rethinking everything after the cat is out of the bag. Only it's even more serious when it's the government that has the tools, the data, and a policy need to address.
  13. Statistical analysis, for one thing. The kinds of things that got shut down at CDC a couple decades ago were studies on the increased risk of gun violence in homes that had a gun and the like. That's perfectly fine. Mental health can be a component of those kinds of analyses: for example, how many gun crimes were committed by people with a history of diagnosed mental illness. That kind of data is perfect for crafting legislative responses to the gun violence problem. Would fewer guns help? That can be informed by data. Would mental health funding help? Same.

      The question I don't think the government should be looking into is "Can we develop software algorithms to identify future shooters, who haven't yet done anything, from the information pulled from their smart devices?" It's too easy to deploy that information for dangerous purposes, and it's not entirely clear what a legitimate purpose would be.

      What positive do you think can come out of it? Per the document, the proposal is to develop "Advanced analytical tools based on artificial intelligence and machine learning" that can "be applied to the data." The data is information collected from "Apple Watches, Fitbits, Amazon Echo and Google Home," along with data "collected by health-care providers like fMRIs, tractography and image analysis." And the end goal is "to develop a 'sensor suite' using advanced artificial intelligence to try to identify changes in mental status that could make an individual more prone to violent behavior."

      So the government creates this "sensor suite" that can sift through data from personal smart devices and find markers that predict a risk of violent behavior. What then? What do you do when you have a way to roughly predict violent people from web searches and certain words overheard by your Echo? I guess in theory you could use the markers to try to find those same people through non-invasive means, like giving psychiatrists ideas to focus on in therapy. But how many shooters would that actually stop? It's much easier to have computers spit out some names for surveillance.
  14. I hear you. But they could have done a lot more to stop Trump. From what I could tell, they started doing some things to mitigate some of the damage Trump is causing. But they were never willing to put their own interests at risk by doing what it really takes to oppose Trump.
  15. I've never mentioned that program because I've never heard of it. I know about HARPA because I read about it in the Washington Post. It has nothing to do with which administration it was; I roundly criticized the Obama administration for the activities that Edward Snowden revealed and even said I would vote for Rand Paul in 2016, in no small part because of this issue (which I now regret after realizing I was foolish to believe he was for real). You literally have no gear for debate other than "You only care when it's a Republican!!!" And the government's ability to read facial expressions through video surveillance is very concerning. I'd be shocked if the data from that program hasn't already made its way into alternative uses. Why would you think that "DCAPS" justifies the development of even more advanced tools to identify people with aberrant thoughts? If that were the standard, there would be no limit to government intrusion, given the things Snowden exposed. We need to think about privacy more now, not less.
  16. No, my assumption is that there is a significant likelihood that tools developed by government agencies will eventually be used by our intelligence organizations and DOJ when useful to their missions, and that the risk is almost a certainty when the very purpose of the tool is to prevent crime before it happens. This is, by design, the government funding and developing an artificial intelligence system to comb through data from personal computing and listening devices to identify aberrant people with mental problems. If NSA were developing it in-house, you'd freak out. But it's all good because HHS is developing it out in the open, with greater access to smart people at research institutions and tech companies? In all seriousness, once the system is developed at HHS, what do you expect it to be used for? Especially when there is an outbreak of shootings and people are desperate to stop them before they happen? For what does the government then use an artificial intelligence system designed to analyze our web searches and physical movements and the things we say in our homes? And if there's something worthwhile that HHS might use it for, how do you stop other, more dangerous government organizations from using it?
  17. Thanks. In the video posted earlier, the guy appears to submerge the tip for the first ten seconds or so, then brings it to the surface with the hissing for about 30 seconds, then submerges the tip again at the end for another 10 seconds. Is that the right process? I've gotten pretty good at getting the volume and the whirlpool, but I'm not sure I'm bringing it all together correctly because I can't do any art. When I pour the finished milk in, I first circle it around to mix it in, but then I can't get anything to stay on the surface except for, suddenly, a thicker foam at the very end that doesn't look like the light foam art I get at shops.
  18. Not just coordinating private research. Storing data and developing artificial intelligence systems to comb through Amazon Echo and Google Home data for signs of mental illness.
  19. And this kind of data already raises enormous privacy concerns warranting extensive oversight. I don't want my cholesterol data being used to my personal detriment with respect to health insurance or health care. But there's a big difference between striking that balance for disease prevention and identifying mental illness, literally through listening devices, to prevent crime. The amount of exercise I get each day as tracked on my Fitbit is one thing, and even that private info poses major concerns. My interactions with my Amazon Echo (if I had one) are something else entirely. The former risks sweeping up location and physical health data at most. The latter sweeps up words and ideas by design, for public safety in an expressly criminal context. And you're deluding yourself if you think our intelligence and criminal justice organizations can't misuse the information collected in a DARPA-like initiative.
  20. No, my evidence is that the program explicitly proposes the very thing I am concerned about. If those other entities you point to are using artificial intelligence to identify aberrant people with data collected from their home listening devices, let me know the names of the programs so I can oppose them too.
  21. At least we all know now that you're full of shit on government surveillance, along with everything else. "Advanced analytics," meaning government-run artificial intelligence to identify aberrant people before they commit crimes using data from Apple Watches, Amazon Echos, and Google Homes. Surveillance (with a privacy promise!) and predictive AI is the same thing as government dollars funding private research into gun violence because... well, it just is!
  22. I've been on board for a few months, but no way did I think she had any chance even a year ago. A funny thing happens when people run for president. Some people seem like they'd be naturals, and they just fizzle. Others seem like they'll be also-rans, and then they turn out to have another gear and just surprise the hell out of you. I don't know if she'll win the nomination, but it's clear that Warren has another gear I had never seen before, and she's gone to a new level in this primary. And I think, as people hear her campaigning, she's going to keep converting people who are skeptical based on past impressions.
  23. Ooh, it's a "DARPA-like initiative." Keep repeating that, it sounds much better than actually paying attention to the details. I mean, you don't (and didn't) have to say one word about the development of a government artificial intelligence system to identify aberrant people using their personal home listening devices. It's a DARPA-like initiative! Merely applied to using our personal health information and private conversations to identify which of us might be dangerous in the future. Don't worry. It's a DARPA-like initiative!
  24. Well, yeah. And his fentanyl deal with China was supposed to be evidence of his deal-making prowess. Its failure should be a big personal hit showing that he got taken for a fool.