You are so full of it. I am going to close with this, from the OpenAI website: “We are currently in private beta, please describe your use case or product below to join the waitlist.”
I will not be wasting my time with your comments from here on.
I think I am going to give up on this thread here. The idea that you ignore ethics until you start making autonomous agents is immoral. The systems that AI is already used in are ALREADY autonomous if you draw the boundary around the system at the appropriate scale.
I don’t see the like button as an issue at all… but what can be, and is, done with the data collected through it can be antisocial, predatory, or even beneficial for society. It’s ambiguous in its function, to the point that maybe there needs to be some regulation in the U.S. about what is allowed to be done with such collected data.
Saying the front-end and back-end developers who create the dataflow for a ‘like’ button should be responsible for what is ultimately done with that data… that perspective seems overly simplistic; it disregards the nuance of reality and (I feel) tries to force things into too binary a view… again, all of this being a product of how our brains are always seeking to use the fewest calories to arrive at a result (classification, regression, planning, understanding the ‘danger’ around us, etc.). That doesn’t mean the most efficient view of the world is the most correct, however. It just means our brains found an easy shortcut by which to bucket things into categories.
Out of curiosity, may I ask what your background is? That might help us all reply in a way that speaks to your personal or professional experience.
Again, I repeat my statement: you are not fully aware of how machine learning in general works. Your arguments are completely non-technical in nature.
Again, GPT-3’s model, structure, and research are fully public. GPT-3 uses the same self-attention architecture that GPT-2 used; the basics haven’t changed, only the size of the model and of the data. GPT is a causal language model, which puts it at odds with its MLM (masked language model) brothers, but that is about the only difference there is.
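For the curious, the distinction is easy to see in code. Here is a minimal sketch, assuming the Hugging Face transformers library; gpt2 and bert-base-uncased are just convenient small public checkpoints:

```python
from transformers import pipeline

# Causal LM (GPT family): predicts the next token from left context only.
generator = pipeline("text-generation", model="gpt2")
print(generator("Self-attention lets the model", max_new_tokens=20))

# Masked LM (BERT family): fills in a blanked token using context from both sides.
filler = pipeline("fill-mask", model="bert-base-uncased")
print(filler("Self-attention lets the [MASK] weigh every token in the input."))
```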
What is not public is the pre-trained 175B checkpoint. Without the checkpoint you can’t fine-tune the model on a downstream task, so you can’t use it for your own domain.
But since GPT-2’s 1.5B checkpoint is publicly available, you can scrape your own corpus of fake news and use that to further some propaganda.
Thus, there was no ethics-related reason for OpenAI not to publicly release the 175B model, because models with far fewer parameters can easily match GPT-3 within their own domain.
That is because GPT-3 was meant to be a jack of all trades (domains) that can do many things: from stories, to articles, to medical publications.
But for a non-state actor who just wants to spread fake news, his own fine-tuned model would surpass GPT-3 at making fake news, because it is specialized to a single domain, which it can utilize to its full capacity.
If you have any more doubts, I can send you some resources where you can read up on transfer learning.
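To make the transfer-learning point concrete, here is a minimal fine-tuning sketch, assuming the Hugging Face transformers library plus PyTorch; my_corpus is a hypothetical stand-in for whatever domain text you collect:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

# Hypothetical domain corpus; in practice this would be your scraped text.
my_corpus = ["first domain document ...", "second domain document ..."]
batch = tokenizer(my_corpus, return_tensors="pt", padding=True, truncation=True)

# Mask padding positions out of the language-modeling loss.
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):  # a few gradient steps, purely for illustration
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss {loss.item():.3f}")
```

The same few lines work for any public causal checkpoint; the barrier is access to the pre-trained weights, not the architecture.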
Just because you can’t see the issue does not mean there is not an issue. Typically when someone says there is an issue, it is because there is an issue. Consider how reducing people’s possible reactions to a literal binary option reinforces binary thinking.
Quite the opposite: it encourages front-end and back-end developers to educate themselves more broadly and consider the nuances of reality beyond their immediate job function and paycheck.
Ideally the cogs would not want to defend being cogs, but I understand this is not comfortable.
Regarding this thread, it is mainly a limited background in social psychology. I am trying to stop interacting on this thread; you are welcome to continue the conversation via PM if you like.
Your argument here is that the world is full of harm, and computers are being used to automate that harm. We all already agree with you about this. However, this is an unfavorable position to argue because the solutions are not easy. If they were easy, they would already have been implemented. Most of the solutions involve legal, political, or societal action.
Also, this argument is boring because it relates to weak AI, which is not the topic we’re interested in here on this forum.
The Bailey.
A more desirable but harder to defend position.
Your argument here is that strong AI should have a system of morals which is beneficial to us humans. This position is desirable for many reasons. At face value it is a reasonable thing to ask for, and it does appear to be in direct conflict with Jeff’s new book.
Unfortunately this argument is difficult to defend. One problem with it is that it is short and vague. “Undirected” simply means “not directed, planned, or guided”, and “beneficial” is problematic because it raises the question “who should benefit, and how?” I looked through the FLI website and couldn’t find any rationale or longer-form explanations of these ideas.
Another problem is that most people who study AGI have already considered the ethical issues and come to their own conclusions. Personally, my conclusion is that AGI is a mixed bag. It will surely have both positive and negative effects on the world. It will introduce new ethical issues which did not exist before. And on the whole, I think it will be a boon to humanity. That’s why I see it as a moral imperative for me to study AGI: because I see it as the best possible use of my time.
Ah, if only folks didn’t need to work for a living, had no debts, and no financial obligations. From experience, it takes a solid footing to be able to say “No, I won’t do this.” to an employer… something which I fear most of our fellow humans lack in most circumstances.
There is more money spent, and more people working, on the “ethics” of AI than on the actual technology. It also doesn’t help that most of them are Chicken Littles and doomsayers. That’s why most of the time I don’t take them seriously.
Plus, you can have all your ethics committees and laws, and they won’t do any good.
It always amuses me how people think if you make a LAW then the problem is solved. The only way laws succeed is if the wider public agrees with them.
You need just one bad actor and that will end all the highfalutin ethics castles you built.
The only road is convincing people. What I see from the “ethics people”, as with every complex problem that needs to be solved, is an authoritarian approach, which is why technical people sneer at them.
Also, the current “everybody has his own truth” crowd in the intelligentsia is not capable of coming up with ethics standards.
Ultimately, I think this is why regulation is needed. The threat of fines and reputational loss seems like the only thing that’ll work. Put the burden of not messing about on the executive and board level, with pressure from the stockholders. Otherwise there’ll just be Google/Facebook finding cause to remove folks who raise a stink internally. If there’s a regulatory framework for whistleblowers with regard to violations of those regulations, however, there’s both protection and an additional reason for corporations to engage in enforcement.
As for the crowds that say that doesn’t work: I can tell you I spent many tens of hours while working for a financial corporation going through classes and taking tests, all for the purpose of trying to avoid violating regulations. Regulations work… otherwise why would there be all the bellyaching from certain politicians and lobbyists about them?
You are dreaming, right? :) You can’t be that naive.
Do you think the military or authoritarian governments care about regulations?
As for big companies, they write the regulations.
I’ve seen that it works. Work in finance for a while. It isn’t perfect, but it’s certainly better than nothing at all. I’ve also seen it in medical device firmware. There is a working model for how to do this.
So… because some people might ignore it, we shouldn’t do anything at all? Not even try? Should we all then be chasing the lowest common denominator, in a race to the bottom?
Sometimes, perhaps. But again, why would big companies be complaining about regulations if they didn’t work or didn’t matter?
It works for food safety, financial regulation, automobiles, building construction, and any number of industries where we decided there needed to be a referee, most frequently in the form of government. I sincerely apologize if you’ve bought the myth of ineffective government… I guess I’m not quite that pessimistic yet.
I know you are avoiding responding to this thread, but perhaps others are interested in discussing this point. My thought is that perhaps that could be the case, but one could also draw the line earlier than the point at which you are still trying to work out the basic cortical algorithms that will eventually be used in an AI system.
For example, what about the libraries you are using? Or the IDE you are writing code in? These are also tools, just like HTM itself in its current form, and could also be used for nefarious purposes. HTM in its current form has no agency to behave morally (or to behave in any way at all, for that matter). It could certainly be used in immoral applications (for example, predicting whether an individual will commit illegal behavior could easily suffer from biased input data). What agency would Numenta have to prevent such a use of the TM algorithm? How would the creator of a hammer prevent someone from slugging another person upside the head with it?
There seem to be two different ways that “beneficial” is being interpreted here. One is a tool being used in a beneficial manner (this is the one which is also pretty difficult for an innovator to do much about), and the other is the tool itself (an agent) behaving in a beneficial manner. The latter is a property of a particular instantiation of the tool (not the source code of the tool). IMO, it is on the person doing the instantiation to follow the proper ethical principles. This will likely include the original developers at some point in the project, because it would be pretty difficult to get the coding of an autonomous agent right without instantiating it in some way.
Which means, when the developers reach that point in the project, it is probably an imperative that they should have some expertise in this topic, so that they know the importance of implementing things like explainability, etc. where feasible. At the very least, those of us who are closely following their work should have this expertise, and call out any issues we spot.
Ethics is not the same as morality. Ethics includes (but is not limited to) metaethics, normative ethics, and applied ethics. Morality can be thought of as the limits of your good and bad behavior. When you try to justify your morality, you are typically appealing to a particular interpretation of a particular normative ethics, although most people would not know that is what they are doing.
When you try to solve the problem by saying what Numenta should do, you are just espousing your moral preferences. What appear to be obvious criteria may not be the right criteria in a new circumstance. For example, drawing an analogy between hammers and weapons, or hammers and intelligent systems, is not necessarily coherent. Ethics can provide a way (i.e., a method, not a moral code) to evolve your morality so that it can get beyond the socially conditioned norms you have been given.
This is partly why I have not said what Jeff should do and I tried to point out a concern about the moral framework he assumes i.e. a question of ethics. The assumption that all tools/technologies are equal in regards to ethics is simply naive if you study even a little bit of ethics.
This thread has run its course, in part because nobody is discussing the actual topic. The technical term would be unconscious incompetence. If someone has managed to shift to conscious incompetence due to this exchange, then it will have been worth it, I guess. I have not seen any sign of that, so I give up.
Unfortunately, reading the FLI AI Principles that you linked to doesn’t get me there either. I only see moral imperatives there (most likely because I am using different reference frames than you are). I suppose I’ll leave it up to you and any others with the correct reference frames to watch this project progress and let us know when we are going wrong.
Yes, we certainly wouldn’t want to take the time to learn about it ourselves. Well, at least you didn’t write a book about the topic! Given that such a large part of TBT is concerned with ethics, it will be interesting to see if Jeff engages with that community (or vice versa).
I have spent time on it, and I do not see the distinction you do, sorry. You are an expert in your field, and I am in mine. We’ll have to leave it at that.
It is not your fault and I have nothing against you. But this is the perfect demonstration of problems in our education system. You have been isolated from domains of knowledge and I suspect that is in part because it would be disruptive if the cogs had a view on the wheel.