If YouTube is still pushing racist and alt-right content onto people, then they can get fucked. Why should we let some recommender system controlled by a private corporation have this much influence on American culture and politics??
What I’m saying is, we don’t know what physical or computational characteristics are required for something to be sentient.
Now that I use GitHub Copilot, I can work more quickly and learn new frameworks with less effort. Even in its current form, LLMs allow programmers to work more efficiently, and thus can replace jobs. Sure, you still need developers, but fewer of them.
Why is it that these sorts of people who claim that AI is sentient are always trying to get copyright protections? If an AI were truly sentient, I feel like it’d want, like, you know, rights. Not the ability for its owner to profit off of a cool Stable Diffusion generation he made that one time.
Not to mention that you can coerce a language model to say whatever you want, with the right prompts and context. So there’s not really a sense in which you can say it has any measurable will. So it’s quite weird to claim to speak for one.
While I agree that LLMs probably aren’t sentient, “it’s just complex vector math” is not a very convincing argument. Why couldn’t some complex math which emulates thought be sentient? Furthermore, not being able to change, adapt, or plan may not preclude sentience, as all that is required for sentience is the capability to perceive and feel things.
If you are just getting started, this is a good resource for learning hiragana and katakana.
Past that, I used Anki and Bunpro for learning vocab and grammar. However, an alternative to anki for vocab that’s definitely worth checking out is jpdb.io, and Cure Dolly’s youtube videos are good for learning grammar.
There are also some shared Anki decks with practice sentences; I hear those are a pretty good way to start reading so that you can work your way up to reading books/manga and stuff.
Here’s another website that’s worth reading through if you’re interested in doing immersion learning with Japanese.
Personally, I find a lot of Peter Singer’s arguments to be pretty questionable. As for some of the ones you’ve mentioned:
For one, killing humans, no matter how humane the means, is seen by most to be an act of cruelty. I do not want to be killed in my sleep, so why is it okay to assume that animals would be okay with it? While he is a utilitarian and doesn’t believe in rights, killing a sentient being seems to me to have much greater negative utility than the positive utility of the enjoyment of eating a chicken.
Also, farming animals for slaughter will always be destructive towards habitats and native species. Even if broiler chickens were kept alive for their natural lifespan of 3-7 years instead of 8 weeks to alleviate any kind of ethical issue with farming them, there is still an opportunity and environmental cost to farming chickens. We could use that land to cultivate native species and wildlife, or to grow more nutritious and varied crops for people to eat, yet instead we continue to raze the Amazon rainforest to make more land for raising farm animals and growing feed. De-densification of farms would only make the demand for farmland even greater than it already is.
Finally, the de-densification of farms would mean a significant increase in the cost of meat production. We’d be pricing lower-income groups out of eating meat, while allowing middle- and upper-class folks to carry on consuming animal products as usual. We should not place the burdens of societal progress on the lower class.
I was under the impression that Starlink satellites orbit too low to meaningfully contribute to Kessler syndrome, since their orbital decay time is around 5 years. Don’t get me wrong, I don’t like Starlink either, I just don’t know of any long-term consequences.
Sam Altman is a part of it too, as much as he likes to pretend he’s not.
Section 3a of the bill is the part that would be used to target LGBTQ content.
Section 4 talks about adding better parental controls which would give general statistics about what their kids are doing online, without parents being able to see/helicopter in on exactly what their kids were looking at. It also would force sites to give children safe defaults when they create a profile, including the ability to disable personalized recommendations, placing limitations on dark patterns designed to manipulate children into staying on platforms for longer, making their information private by default, and limiting others’ ability to find and message them without the children’s consent. Notably, these settings would all be optional, but enabled by default for children/users suspected to be children.
I think the regulations described in section 4 would mostly be good things. They’re the types of settings that I’d prefer to use on my online accounts, at least. However, the bad outweighs the good here, and the content in section 3a is completely unacceptable.
Funnily enough, I had to read through the bill twice, and only caught on to how bad section 3a was on my second time reading it.
It’s not exactly proof, but this graph seems to support that claim to an extent.
I don’t think a recursively self improving AI (a la a singularity) is something that will be made soon, if ever, especially as we push the limits of available computing power. There’s no such thing as infinite exponential growth in reality, as there’s always an eventual limit to growth.
I think AGI, in some form, could possibly happen relatively soon (like next three decades or so), but I’m not sure it will be of the recursively self improving variety. Especially not the sort that magically solves all of humanity’s problems.
He co-invented PDF in '91. His PhD thesis, referenced in the summary, is a solution to the hidden line problem in computer graphics.
I switched to vertical tabs in every program that I could, and I think it might have actually made me a little more productive. Visual Studio has an option for it, and I highly recommend turning it on if you use VS. I can have a bunch of different tabs open so that I can quickly reference them if needed.
I don’t like SBF at all, but I also think veganism should be a respected ethical position. Just like how I don’t like Caitlyn Jenner, but I’ll still use her preferred pronouns.
I’d personally consider it pretty cruel and inhumane to force someone to violate their own ethics on a daily basis.
I disagree with your interpretation of how an AI works, but I think the way that AI works is pretty much irrelevant to the discussion in the first place. I think your argument stands completely the same regardless. Even if AI worked much like a human mind and was very intelligent and creative, I would still say that usage of an idea by AI without the consent of the original artist is fundamentally exploitative.
You can easily train an AI (with next to no human labor) to launder an artist’s works, by using the artist’s own works as training data. There’s no human input or hard work involved, which is a factor in determining whether a work is transformative. I’d argue that if you can put a work into a machine, type in a prompt, and get a new work out, then you still haven’t really transformed it. No matter how creative or novel the output is, the reality is that no human put any effort into it, and it was built off the backs of unpaid and uncredited artists.
You could probably make an argument for being able to sell works made by an AI trained only on the public domain, but even those should not be copyrightable IMO, because they’re not human creations.
TL;DR - No matter how creative an AI is, its works should not be considered transformative in a copyright sense, as no human did the transformation.
Out of curiosity, I went ahead and read the full text of the bill. After reading it, I’m pretty sure this is the controversial part:
SEC. 3. DUTY OF CARE. (a) Prevention Of Harm To Minors.—A covered platform shall act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate the following:
(1) Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.
The sorts of actions that a platform would be expected to take aren’t specified anywhere, as far as I can tell, nor is the scope of what the platform would be expected to moderate. Does “operation of products and services” include the recommender systems? If so, I could see someone using this language to argue that showing LGBTQ content to children promotes mental health disorders, and so it shouldn’t be recommended to them. They’d still be able to see it if they searched for it, but I don’t think that makes it any better.
Also, section 9 talks about forming a committee to investigate the practicality of building age verification into the hardware and/or operating system of consumer devices. That seems like an invasion of privacy.
Reading through the rest of it, though, a lot of it did seem reasonable. For example, it would make it so that sites would have to put children on safe default options. That includes things like having their personal information be private, turning off addictive features designed to maximize engagement, and allowing kids to opt out of personalized recommendations. Those would be good changes, in my opinion.
If it wasn’t for those couple of sections, the bill would probably be fine, so maybe that’s why it’s got bipartisan support. But right now, the bad seems like it outweighs the good, so we should probably start calling our lawmakers if the bill continues to gain traction.
apologies for the wall of text, just wanted to get to the bottom of it for myself. you can read the full text here: https://www.congress.gov/bill/118th-congress/senate-bill/1409/text
Idk, after having been in the crypto space in the past, I’m still pretty tempted to call it almost universally a scam.
Regardless of the environmental impacts (which have been solved by some blockchains, like you said), I just think it exposes users to a completely unacceptable amount of risk for very little gain.
You’re required to be in complete charge of your own data security, and if your private key is stolen, you lose your life savings with no recourse. If you make a minor slip-up and grant permissions to the wrong website, you’ll lose everything in your hot wallet. If there’s an error in a smart contract you use (which has happened many times), then all the money you’ve given to it could be taken out from under your nose. You can’t even, like, refund transactions – there are no consumer protections at all.
But like, to what end? What’s the actual benefit of using crypto? Sure, you can make anonymous transactions with XMR, that’s a tangible use case. But what’s the actual benefit to using something like Ethereum?
I am one of those people who’s pretty concerned about AI, but not because of the singularity thing. (The singularity hypothesis seems kinda silly to me.)
I’m mostly concerned about the stuff that billionaires are gonna do with AI to screw us over, and the ways that it’ll be used as a political tool, like to spread misinformation and such.
Then why is animal abuse a crime?