Iron Will on Everything: A.I., and No More News
Is Hannah Byrne an A.I.?
We’re discontinuing daily news – a decision which, according to our survey, the vast majority of you agree with. In this first episode of Iron Will on Everything, Will explains why. Also, what we should and shouldn’t fear about A.I.
2 Comments
(0:01 - 0:26) Since I brought Hannah on board as my co-anchor, I've had a couple of people ask on the website: is Hannah an AI? She seems kind of robotic. I'm going to answer that question in a couple of minutes. Years ago, there was a book written with the title All I Really Need to Know I Learned in Kindergarten. I've never read it. But I thought it was a cool title. (0:26 - 0:39) And I thought if I ever wrote a book, the title of my book would be All I Really Need to Know I Learned in the Gym. Because I have learned a lot of things from being a gym rat for 40-plus years. I learned about self-discipline. (0:39 - 0:59) I've learned that most of the major changes we want in our lives are the result of small things we do every day. And I've learned from watching other people. Every January, there's a flood of new people in the gym. They're the New Year's resolutionists. This is the year I'm going to get in shape. And by April, most of them are gone.

(1:00 - 1:11) Now, you're probably thinking a gym rat like me looks at those people and says, ah, quitters. Well, actually, just the opposite. I think their decision to quit is entirely rational. (1:12 - 1:29) Why would anyone do something that's inconvenient, uncomfortable, and isn't working? That would be insane. Some other time, maybe I'll do a video on why most people don't get the results from exercise that they want. But that's not important today. (1:30 - 1:49) What's important is that the reason for their decision to quit is the same reason we're no longer going to be doing daily news on the Iron Wire. It was inconvenient, uncomfortable, and it wasn't working. See, back in 2022, when I launched the Iron Will Report, I thought we had to do news. (1:50 - 2:11) We're an independent media outlet. Of course we have to do news. You ever look back on some of the decisions you've made in the past and ask yourself, what was I thinking? In any subscription-based business like the Iron Wire, people are going to leave. You're going to get attrition. Sometimes people lose interest. They find something else they like better, or maybe they have a financial disaster. (2:12 - 2:20) I've heard that from some people, where they couldn't even afford the small fee that we charge. That's okay. That's just part of a business like this. (2:20 - 2:44) As long as you have more people signing up than leaving, in time it builds up. But since we started doing daily news, we've lost more people than we were bringing on board. And I think it might have something to do with the answer to the question: is Hannah Byrne an AI? The answer is both no and yes.

(2:45 - 3:48) Hannah Byrne is a real person. She lives in Toronto. She's somebody I've actually worked with in the past. Those of you who have been following us for a long time may remember that Friday show, the satirical bi-weekly comedy show we were doing way back in 2021. Hannah helped us out with that, with the Newsbusters segment. I ain't afraid of no truth. Welcome to Newsbusters. I'm Jenna Verity. Unvaccinated people pose a huge risk to vaccinated people. This, according to a recent study published in the Canadian Medical Association Journal, or CMAJ. (3:49 - 4:20) The study, headed by David Fisman, was widely reported in almost all Canadian newspapers and on medical websites around the world. It clearly shows there is a huge risk to the vaccinated when mixing with the unvaccinated.
Fisman, a qualified professional who has served on advisory boards for several pharmaceutical companies, including Pfizer and AstraZeneca, feels that his study is accurate and stands by his work, as do the other two scientists who worked with him. (4:22 - 4:39) Correction: one scientist and one student. Correction: one student and one former Health Canada employee. This study has come under fire from other medical professionals who say it should never have been published. (4:39 - 5:05) They contend the study would not have been published if it had gone through peer review and ethics review, but it was subjected to neither because it was only a computer model and not a real-world study. Regardless of the backlash, the study still remains on the CMAJ website. And sure, it is true that the study is getting decimated in the response section in a post-peer-review fashion. (5:06 - 5:32) And the study uses statistical assumptions instead of real data. And the study assumes that only 20 percent of the population have natural immunity instead of the scientifically proven 90-plus percent. And yes, if you use the real-world 90 percent, the model actually shows that the vaccinated pose a risk to the unvaccinated. (5:32 - 5:59) But ignoring all of that, the CMAJ and their partner, Pfizer, feel there's no need to remove the study. So in reality, the headline should read: Pfizer and big pharma hitman David Fisman say untapped population poses risk to pharmaceutical companies' bottom line. In other words, the unvaccinated are bad for business. (6:02 - 6:23) I'm Jenna Verity, and that is the real news.

So, yes, Hannah is absolutely a real person. But when you've been watching the daily news, you've been watching an avatar, not just of Hannah, but of me most of the time. (6:24 - 6:39) Sometimes it was really me. So why? Well, it certainly wasn't an effort to deceive you, my very valued viewers. You know me, you know I'm all about telling people the truth, and I would never intentionally deceive any of you. (6:39 - 6:53) I didn't mention that they were A.I. avatars because, given my reasons for doing it, it just didn't seem important. And when you understand my reasons, I think you'll agree. In fact, according to the survey, which many of you took the time to respond to (thank you so much), (6:54 - 7:08) the vast majority of you agree with this decision to drop the daily news.

The reasons for using A.I. avatars were the same two reasons that dictate a lot of things in our lives: time and money. (7:08 - 7:21) Let's start with time. Hannah, as I said, really is in Toronto, and she works a full-time job, which means the earliest she could have sat down in front of a camera every day to do the news with me would have been 6:30 her time. That's two and a half hours to deadline. (7:21 - 7:33) And because I have to edit the whole thing and get it prepared to go out, we couldn't have made it. And I already felt that 9 p.m. Eastern was the absolute latest we could put the news out. That's 10 p.m. for you Atlantic folks. (7:34 - 7:39) Well, I go to bed at 9:30. I don't know about you, but that's getting awfully late. And then there's money. (7:40 - 7:50) We do things on a shoestring around here. We're still young. We don't have the subscription income to do anything close to what mainstream news does.
(7:51 - 8:09) When you see a news show produced by a crew of 15 or 20 people, they can do that because they're getting their share of the two billion dollars a year our government pays them to be mouthpieces. We don't get any of that. And so really, the news is being produced by one person: me. (8:11 - 8:21) There's no way I could have afforded to pay Hannah to sit in front of a camera for an hour every day. And it's not really an hour, I know; it's only 10 minutes that you see, on average. (8:21 - 8:34) But by the time you put on the monkey suit and the makeup (because even I have to wear makeup, otherwise the lights give me shine on my face), set up your camera and your lights and all that, sit down and record it, and sometimes do a couple of takes, (8:35 - 8:46) it's an hour. So now we're talking 7:30, and I couldn't afford to pay her for the time. What I could afford to do was pay her to license her AI avatar.

(8:47 - 9:08) Now, this is the point where some of you are going to be thinking: wait a minute, does that mean somebody could make an AI avatar of me? No, they can't. There are a lot of safeguards in place for creating these AI avatars, and it requires the active participation of the person you're making the avatar of. In fact, it requires several hours of work to do it. (9:09 - 9:28) So no, you don't have to worry that somebody can make an AI avatar of you. Which brings me to the topic I wanted to talk to you about today: AI. Should we be afraid of it? And if we should, what exactly should we be afraid of? I don't like the term AI, artificial intelligence. (9:29 - 9:49) I think it's very ambiguous. If you ask 10 different people for their definition of intelligence, you're going to get 10 different answers. What an AI is, is just a smart computer program (and even the word smart is ambiguous, isn't it?) that can learn, but only within the bounds of what it's programmed to learn about.

(9:49 - 10:13) Some of the early experiments with AI were dune buggy races through the desert with AI drivers. What they were programming those drivers to do was learn from their experiences. Rather than programming a dune buggy to, say, go around a ravine, they would allow the dune buggy to drive into the ravine the first time, where it would get stuck and have to be towed out. (10:14 - 10:31) And from that, the AI would learn: well, don't drive into the ravine, go around it. But that dune buggy, even if you attached, say, mechanical arms to it, isn't going to suddenly decide to make you a cappuccino. And that's the thing that AIs can't do. (10:31 - 10:49) And that's the reason why, on the one hand, we shouldn't fear them. There are other reasons why we should, and we'll get to those. But for right now, let's talk about why you shouldn't. When AIs first became popular and people found out what they could do, a lot of people were concerned: oh dear, we're creating Skynet, if you've watched the Terminator movies. (10:49 - 10:59) And someday those AIs are going to decide that we human beings need to be exterminated. They're going to launch the nukes and wipe us out. Not only will that not happen, it can't happen. (11:00 - 11:15) Because AIs do not have volition. And volition, I'm using this word very, very precisely, means the ability to make a decision that is entirely independent of environmental stimulus. (11:16 - 11:23) And only sentient creatures can do that.
Sentient means self-aware. In fact, there's a test for that for certain animals. (11:23 - 11:34) Are they sentient, self-aware? You put a mirror in front of them. If they recognize that they're looking at their own reflection, they're sentient. They recognize that they exist apart from their environment. (11:35 - 11:44) Humans can do it. Some smarter animals, like elephants, apes, and dolphins, can do it. But take my dog Sam, a Sheltie and, I have to say, the smartest dog we've ever owned. (11:44 - 11:51) Sam's a genius. But you put a mirror in front of Sam and he sees another dog. He's not sentient. (11:52 - 12:14) And so sentient creatures can do this thing called volition. We can decide, just out of our own initiative, to do something that is completely independent of our environmental stimulus. A good example of this is one of the things I've been using AI for, and that is to write code.

(12:15 - 12:29) You see, some of you know that I used to be a web developer before I became an independent journalist and freedom fighter. But a web developer and a programmer are actually different things. I don't write code, but there were certain functions I wanted on the Iron Wire site that required custom code. (12:29 - 12:49) Before AI came along, I would have to go to a website called upwork.com, where you can hire contractors around the world. I would typically hire a programmer in Ukraine or Poland, which is where the good ones often are, and they'll work for half of what I'd have to pay someone here to do the same thing. I'd tell them what I needed, and two or three weeks later, I'd get working code. (12:50 - 13:10) Well, I didn't want to wait two or three weeks. And so I used Grok, Elon Musk's AI, which is the one I actually recommend you use, and had it write code for me. The process is actually very similar to working with a human being, because even when a human programmer writes some code for the first time, it usually doesn't work: you'll get some errors, or it won't quite behave the way you want it to. (13:10 - 13:24) And so I would have to go back to Grok and very precisely explain: this is what happened. Then Grok would rewrite it and I'd test it again. And we'd just keep doing this back and forth until finally, about five or six hours later, I had code that worked.

(13:25 - 13:45) But now here's the example of volition. Sometimes in that process, and I went through it several times for different projects, I'd be a couple of hours into beating my head against the digital wall when it would occur to me: wait a minute, I think there's a better way to do this. And so I would go back to Grok and say, we're going to do this a completely different way. (13:45 - 13:52) Let's pivot. Let's go in this other direction. Now, here's something you need to understand about AIs, if you've been interacting with them. (13:52 - 13:58) They're programmed to be polite to you. Heck, they're even programmed to stroke your ego. Don't fall for it. (13:58 - 14:14) It's just a program. So every time I did this, Grok would congratulate me on my genius in thinking of going in a different direction and happily go along. And very often, an hour or so later, I'd have my solution: code that worked and did what I wanted it to do. (14:15 - 14:33) But that only happened because I had the volition to say to Grok, we're going to go in a completely different direction.
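For the programmers among you, that back-and-forth boils down to a simple loop. Here's a minimal sketch in Python, just to make the idea concrete. To be clear, ask_ai and run_tests are hypothetical stand-ins, simulated so the example runs on its own; neither is Grok's real interface:

```python
# A toy sketch of the describe-test-refine loop described above.
# ask_ai() and run_tests() are hypothetical stand-ins, simulated here
# so the sketch runs by itself; neither is a real Grok API.
from typing import Optional

attempt = 0

def ask_ai(prompt: str) -> str:
    """Simulated model call: returns 'code' for each round of feedback."""
    global attempt
    attempt += 1
    return f"# generated code, attempt {attempt}"

def run_tests(code: str) -> Optional[str]:
    """Simulated test run: fails the first two attempts, then passes."""
    return None if attempt >= 3 else f"error: crashed on attempt {attempt}"

def develop(spec: str, max_rounds: int = 10) -> Optional[str]:
    prompt = spec
    for _ in range(max_rounds):
        code = ask_ai(prompt)
        error = run_tests(code)
        if error is None:
            return code  # it works, we're done
        # Report precisely what went wrong and ask for a rewrite.
        prompt = f"{spec}\n\nYour last attempt failed:\n{error}\nPlease fix it."
    # Note what's missing: there is no branch where the loop itself
    # decides "this whole approach is wrong, let's pivot." That call,
    # the volition, has to come from the human driving the loop.
    return None

print(develop("Add a custom membership report to the website"))
```

The point is in what's missing: the loop can refine the same approach over and over, but nothing in it will ever decide to pivot. That decision has to come from the human.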
What will never, ever happen is that the AI turns around and says to me, after a couple of hours: this approach isn't working, let's try a completely different one. (14:34 - 14:39) It can't do that. It's not sentient. It's not truly intelligent. (14:40 - 14:54) It cannot have volition. And that's why we don't have to worry about an AI becoming Skynet and launching the nukes and wiping us all out. Unless it's programmed to do that.

(14:55 - 15:11) And this is where we do need to fear AI. The globalists want to use it to surveil and control us all. Because that level of surveillance and control simply wouldn't be possible with human beings sitting there watching what we're all doing. (15:12 - 15:25) Heck, you'd need one person for every person being watched. And then who's going to watch the watchers? Even a regular computer program can't do it, because it has to be able to learn from our behaviors. Regular computer programs can be fooled. (15:25 - 15:36) You can find hacks, ways around them, ways to fool them. An AI, however, can be programmed to watch for tricks and learn from them, just like that dune buggy drove into the ravine and learned not to do that. (15:37 - 15:48) And this is why they need CBDCs and digital IDs. Everything has to be digital. This is why Mark Carney has introduced a bill to begin limiting our use of cash. (15:49 - 15:57) Because cash gives us freedom, and they have to take that away. If everything's digital, then it can be controlled by an AI, monitored by an AI.

(15:57 - 16:25) And if you want to see how far that will go, a couple of news stories I reported on in the past year were about two lawyers who worked for firms that were suing the company that owns Radio City Music Hall in New York. The owners had facial recognition software tied to all the cameras in the building. And twice, a lawyer who worked for one of those firms was removed by security, not because they'd done anything wrong, but simply because they worked for a firm that was suing the owners. (16:25 - 16:41) In one case, the lawyer wasn't even involved in the lawsuit. And in the other, they left that poor woman's daughter alone in the building while they escorted her out. And this is the kind of thing we need to be afraid of with AI.

(16:42 - 16:51) So how do we stop it? Well, you can't put the genie back in the bottle. AI is here to stay. It can be used for good things or bad things. (16:51 - 17:16) And it's up to us as human beings to decide what we do with it. It's up to us to decide who we elect to office, who gets to make the decisions about what AI will and will not be allowed to do, and what kind of safeguards we put in place to stop it from controlling all of us, from taking away our rights and freedoms. And so, as of this week, we're not doing news here anymore. (17:17 - 17:21) I didn't need to do it. I never needed to. I just didn't see that until now. (17:22 - 17:40) Until I realized that not only was it not achieving what I wanted, it was doing the exact opposite. And it was chewing up an awful lot of my time that could be used for much more important things. So one of the things that's going to happen is that I'm going to start doing a weekly commentary like this. (17:40 - 17:48) Most of the time it's going to be on current events, but it could be on just about anything. AI was simply relevant today because it ties into one of the reasons we're not going to do news anymore.
(17:50 - 18:01) Another thing that's going to start happening, which I simply haven't had time to do, is responding to your comments. Every once in a while, I've had time to sit down and read them; I just haven't had time to respond. (18:01 - 18:14) Well, now I will. If you leave a comment on one of my interviews or one of our shows, give me a couple of days, and if your comment requires a response, you'll get one. Because it's about all of us. (18:15 - 18:23) It's not just about me sitting here in front of a camera. It's about all of us. And we have to be a community of like-minded people. (18:23 - 18:33) That's what FreedomComms is for: Freedomcomms.org. If you haven't signed up yet, you should, because we're building in-person freedom communities across Canada. It's absolutely free to register. (18:34 - 18:45) It's how we beat the globalists. We work together. You see, one of the things those people can't understand, because they're largely psychopaths and sociopaths, is altruism. (18:46 - 19:11) People who are willing to do something to help someone else, sometimes someone they don't even know. And so by working together, even though we don't have their financial resources or their political clout, if enough of us just say no to whatever it is they want us to do, their plans won't work. Because they have a major problem: they don't have an army. (19:12 - 19:24) In the past, dictators and tyrants would gain control by taking control of the military. And if they had control of the military, and the military had guns and nobody else did, well, you did what you were told or they'd shoot you. (19:25 - 19:33) But they don't have that. They have to get people to comply, and they do it through fear and through lying to people. (19:35 - 19:47) But if enough of us know what's going on and we just refuse, it takes their power away. And we do that by building communities. I'll see you next week.
Balance is important, I understand your decision. Thank you from Montreal.
Thank you, Francois. I deeply appreciate your support. I've said many times that there's no manual for what I'm doing; I'm making all this up as I go along. The news was just stealing too much time that needs to be put into other things.