Can AI Be Trusted? With J. Kevin Foster
Can AI be trusted to make values-based ethical decisions? Will AI bring its own biases? These are interesting questions that will only increase in relevance as AI continues to advance over the years. We cannot know for certain how much of an impact AI will have on our business decisions, but we can never be too prepared. J. Kevin Foster, CEO of Business Ethics Advisors, tackles this topic with host Chad Burmeister. Kevin's personal experience of being caught up in a federal crime and imprisoned for 37 months led him to a passion for helping other people in business avoid unethical slips and stay out of prison. Listen in and be part of this ongoing conversation that will reshape the face of business ethics in this hyper-digitized world.
Listen to the podcast here:
Can AI Be Trusted? With J. Kevin Foster
I've got a guest on a topic that I've broached only a few times in the first 60 or so episodes of this show. When the questions were submitted, I was excited for this conversation because Kevin Foster is the CEO and Founder of Business Ethics Advisors. As AI comes onto the scene, whether it's AI for sales, AI for manufacturing, AI for HR, you name it, the question around ethics is going to be glaringly obvious. If you've seen some of the decisions in politics or everything going on in the world nowadays, I guarantee you a lot of it is around AI and the ethics of how that AI is being used. Kevin, welcome to the show.
Thanks a lot, Chad. I'm pleased to be here. This is going to be a lot of fun here.
What an interesting business that you're in. Let's start with getting to know you. Did you know in college that you wanted to be the CEO of Business Ethics Advisors?
Let me tell you a little bit about my background and where my passion lies, and it will all make sense. At one time I was a corporate executive with a real estate development company and our company ran into a lot of legal problems. We were developing a big project down in North Carolina. One day I was coming back to Atlanta, where I live, from Florida. As the plane was touching the ground, my cell phone rang. I turned it on and it was our corporate attorney, and he read me a national press release that had been issued by the North Carolina Attorney General's office announcing they had filed a lawsuit against our company. The last sentence read that the US Attorney's Office in Charlotte and the FBI were investigating the matter as fraud.
My name was mentioned, and that was the beginning of a long ordeal that I would not wish on my worst enemies. As it ended up, the US attorney went through all the officers in the company, including myself. I ended up pleading guilty to a federal crime and spent 37 months in federal prison. Two weeks after reporting to prison, I was thrown into solitary confinement, and it was in the solitude of that cell, in my moment of despair, that I vowed to myself that no other person should have this experience. I vowed I would do all I could to help other white-collar professionals avoid unethical slips. Here I am; I founded this company with the promise that I would help others stay out of prison.
That gives me the chills to hear your story. I'm sorry you had to go through that. At the same time, I'm not because that's what caused you to get to where you are. You have to look back at experiences like that and say, “That's part of me and now I get to help other people with my story and my experiences.”
I do, with pleasure, knowing that I'm helping others to keep their families, their finances, their companies. You read about it every day, about different corporate executives, whether it's guys caught up in the #MeToo movement, people who have embezzled, or people who have hidden accounting issues in their companies. There's almost anything that could happen to draw the scrutiny of the authorities.
I'm sure this is something that you've heard of, and if not, I'll be amazed. The trolley car dilemma, are you familiar with this? The trolley car dilemma says you're the conductor on a train and there are two tracks. You can either choose right or left. The train's not going to stop. There are no brakes. On the right-hand side are five people on the tracks. On the left-hand side there's one. If you're programming an artificial intelligence to make that decision, what do you tell it to do? There are many things that the human brain can think through and ask questions about in real time. You have to start thinking about who programs the AI. How do you make those decisions? That's the high-level question.
Now let's go to the next level. The next level says, "Now those five people are in orange jumpsuits and they're on a chain gang from prison. Does that change the decision-making process?" Now the one person on the left is someone in a cap and gown who graduated from high school or college. Now the one on the left is someone in your family. You think of all of these complex decisions that you have to make in life as a human. As we start to move into the world of humans powered by AI, who gets to play judge, jury, and executioner on those kinds of decisions? It's complex.
With AI, everything is only as good as your algorithms. I think there's going to be hard decision-making in a lot of companies on how these algorithms are written. The first thing is that we don't want to be causing any physical or digital harm to people out there. We want to be able to keep confidential information confidential. That AI is going to have to be transparent and explainable. It's going to have to be fair and impartial. That takes a lot. People ultimately are going to have to be responsible and accountable for that. These companies are going to have to figure out who in their organizational structure is going to be responsible for writing these algorithms. They're going to have to be accountable for those different decisions that you're describing.
If you’re not paying for a service, then you’re the product.
I'm thinking of it as you have a CEO, the Chief Executive Officer, and now you have a CEO number two, which is the Chief Ethics Officer. Is there a book on ethics that says, "Here it is"? My personal view as a Christian is that it all comes back to do unto others as you would have them do unto you and love your neighbor as yourself. Where's the ultimate ethics book? Who gets to decide what rules and regulations go into the algorithm?
I teach values-based ethics, not rules-based ethics. Rules-based ethics would be the dos and don'ts that are typical in a code of conduct, the written policy in most major corporations. Even when you have rules-based ethics, everybody's looking at how they're going to be able to game those rules-based systems. It comes down to those personal characteristics that people have, those moral decisions that they make that a computer can't make, that get made by an individual. We're talking about honesty, integrity, trustworthiness, empathy, compassion. Those are the type of safeguards you are referring to with your Christian-based standards. It's not based on Christianity or any particular religion; these are the common things of, "Let's do good with others," that have been around for all of humanity.
There's a certain way that we can treat one another and treat ourselves that makes sense. You as an individual are going to have to be responsible and accountable for those particular actions. When you're looking at AI, someone is going to have to look at how those AI systems are written, raise their hand, and say, "I'm accountable for this. I take responsibility." Who in these organizations is going to take responsibility for this? The buck's got to stop somewhere. You're right, it's generally going to be the CEO. The chief ethics officer is almost at the top in here. Those people are going to report to the board of directors, and the stakeholders in any organization are going to demand it.
We have all of big tech being hauled in front of Congress. Those are all AI-driven organizations that are there for one reason and one reason only, which is to sell ads, to sell personal data, to be able to project how a consumer or company is going to react to a particular situation. That's what they're being paid for. If you're not paying for a service, whether it's Google or Facebook, then the consumer is the product. We've got to figure out how we're going to be able to protect people. We're going to have to be respectful of privacy no matter how we're leveraging that customer data, that consumer data: how we respect it, and even how we allow customers to opt in or out of sharing the data.
I heard an interview between Henry Schuck, the CEO of ZoomInfo, formerly DiscoverOrg (they merged, and they're one of the biggest data providers in the world, a multi-billion-dollar publicly-traded company now), and Dave Elkington, the former CEO of InsideSales.com, which is now XANT. These are both AI-driven tools. When Henry was asked the question about ethics around AI, data, and personal information, he said, "The government might put a stake in the ground around these are the things that you can and can't do," like the GDPR in Europe and the California laws around sharing personal information. But he looked at it from a values-based perspective, as a good leader should and will. He said, "I'm going to be transparent with the people who are in my dataset, send everybody an email, and connect with them: so you know, you're part of our dataset. If you want to continue to be part of our dataset, here's the value of it. If you don't, then you can choose to not be part of the ZoomInfo dataset."
I thought that was a good values-based decision because he didn't need the government to regulate him into directionally-correct decision-making. We need, as humans, to follow our own values. To your point, as CEOs we don't know all the nooks and crannies of things that may or may not happen out there. I think it's about working with people like yourself to help us understand the guidelines and what we should be doing as good corporate citizens and corporate leaders.
I do appreciate you mentioning that example because it clearly shows a corporate executive who is transparent. He is trying to explain to his customer base how their information is being used. I think that is important because once they're able to explain and be transparent, then they can start being fair and impartial in how the data is used and make sure that everything is equitable across all participants. Those participants are going to be able to demand it. They're also going to be able to confirm that the systems are reliable.
Robots cannot make the same values-based decisions that we do, but robots can be programmed to make different decisions based on value trees that were designed by humans. All of that is particularly important. I think that the American consumer, and probably consumers worldwide, has been desensitized to all the data that we've been giving up. Are they fully aware of the tradeoff? "You use this particular platform; we could charge you for every post that you make or for every search that you make, or we don't charge you but you're going to have to watch these ads. We're selling that data, but we think that you benefit from that data."
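To make the "value trees designed by humans" idea concrete, here is a minimal sketch. Every name, weight, and option below is an invented illustration, not any real system discussed in the episode; the point is only that the machine's "choice" is just arithmetic over priorities that people wrote down first.

```python
# Hypothetical value tree: humans assign weights to values up front,
# and the program simply picks the option that best satisfies them.
# All names and weights are illustrative assumptions.
VALUE_WEIGHTS = {
    "avoid_physical_harm": 10,  # highest human-assigned priority
    "protect_privacy": 5,
    "be_transparent": 3,
}

def score_option(option):
    """Sum the weights of every value this option satisfies."""
    return sum(VALUE_WEIGHTS[v] for v in option["satisfies"])

def choose(options):
    """Pick the option that best matches the human-designed values."""
    return max(options, key=score_option)

options = [
    {"name": "share user data", "satisfies": ["be_transparent"]},
    {"name": "anonymize first", "satisfies": ["protect_privacy", "be_transparent"]},
]
print(choose(options)["name"])  # prints "anonymize first"
```

The robot never "decides" anything here; change the weights and the same code picks a different option, which is exactly why accountability sits with the people who wrote the tree.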
You could choose a checkbox for which parts of that equation you want to allow or not allow?
The other solution that hasn't gained any traction, one that I've been pushing, is that you can pay consumers. Have Google and Facebook pay consumers for using their personal information. Normally you pay a manufacturer for building a product for you. If the consumer is the product, why not pay the consumers for using their personal data?
If I go buy a Land Rover, which my wife's in the market for, and one of those ads pops up because of the AI and I go buy that car, I should get a kickback. What I'm hearing you say is that the AI is only as good as its creator. You think of the Terminator, a moving, human-like decision-making thing that can learn over time. I don't think we're at the point where the AI learns that well yet. It's recognizing the differences between 0s and 1s.
All we have to do is look at Twitter and Facebook with the news of the New York Post article, for example, regarding Joe Biden. No matter what side of the political spectrum you're on, you should be appalled that their algorithms are written to stifle free speech. Granted, they're not a government agency, but they're being trusted, being paid, and people sign up for the free flow of information. Those algorithms are quashing people's ability to make posts or to share those posts based on an algorithm written by somebody who has a different agenda. That's a clear unethical breach of AI in my book.
I'm sure it will shine poorly on some of those tech giants. I've certainly talked to people who are saying, "I've deleted my Facebook account," or "I'm moving to another platform," those kinds of decisions. I have a feeling the winds of change are in the air and the tribe will have spoken here soon, like they say on the TV show.
I think that's probably true. If people started deleting their Facebook accounts, I think that would change quickly. I wrote a blog when one of Mark Zuckerberg's texts became public through discovery in a lawsuit he was involved in. I'm summing it up here; I don't have the exact text language, but it essentially said, "Who cares about ethics? Ethics is all about what you can get away with." For the CEO of a major tech company to say that is unbelievable. It wasn't too long after that, Facebook was fined $5 billion for violating the agreements they had made with the federal government over privacy problems they had previously. They had settled, violated it, got sued, and had to pay a $5 billion fine. If the CEO of your corporation doesn't believe in ethics, what does that say to everybody else down below, the people who are writing those algorithms and the so-called fact-checkers?
I heard the fact-checkers are located mainly in China; that came out here as well.
I did not hear that. We're sitting on Zoom right now. I was on a Zoom conference with somebody who is a contractor with NASA, and they were telling me the federal government prohibits their contractors from using Zoom because all of Zoom's servers are in China. For national security purposes, they don't want it to go through China. As we've seen with what's happened with TikTok and WhatsApp, I think all these companies are going to have to build silos of their data and not allow these foreign bad actors to have access to that data.
When I was with Webex from 2005 to 2007, we bought a bunker in Santa Clara after the dot-com bust. We were able to get it for $0.08 on the dollar. This was a hardened bunker; you had to scan your eye to get into the location. I think Randy Barr was the head of security at the time. He walked us, the new hire class, through the facility. He said, "I don't know if I'm supposed to tell you this, but about a month ago we had one government agency come in and say, 'We need to spy on some of these meetings that are going on through your network.' A week later, another agency came in and said, 'We need to spy on the people spying on the meetings for the first agency.'" One government watch group watching the other government watch group. That was back in '05.
I keep thinking, every year that we get further into the future, about the level of AI, the level of technology; we're at an interesting spot. Let's go to the future. I'm going to pull you back later to make sure your story gets out, how people avoid the situation you went through, because right now we're talking about AI and futuristic things. I think there are some basic things we can probably do that will keep us out of jail. Let's talk about this one first: tracing apps. One was put on your phone months ago, if you have auto updates, whether you knew it or not.
I haven't seen that it's been fully implemented yet, but I have to believe now that if I'm in a certain mall or at a church or wherever, and somebody there has COVID-19 and I come home, they should be able to send out an APB and go, "You were in the same place and vicinity as this person." It feels to me that we're all becoming, or already were, endpoints on the network, the way a light bulb is an endpoint on a network. Everything is becoming RFID-chipped. The traditional conspiracy theorists would say, "We're going to get the chip." I'm hearing more and more that's where things are headed. Is that an okay thing from a business and ethics perspective? Should our people be plugged into The Matrix?
People need to have ethics almost as a second nature.
They say that's why South Korea was able to do such a great job in curbing COVID-19: they were intrusive on everybody's cellphone. If you owned a cellphone in South Korea, you had those tracing apps on you. In America, if you said that the government knows exactly where you are, down to 3 feet, I think we'd be looking at another American Revolution. It wasn't all that long ago that the NSA was taking a look at metadata; it wasn't even individual conversations, it was general call records going back and forth.
If you were told, if I was told, that the government had access to where I was within a 3-foot diameter, I think I'd be upset. I'd be looking for a new government. I don't think America will put up with that. I think that's certainly where Democrats want to go with this contact tracing. They want to be able to identify everybody and where everybody's going. I don't know how they'd be able to do it. It's got to be done by AI; otherwise you're talking about hiring hundreds of thousands if not millions of people to contact trace manually.
I have a friend that's been tracking this for a lot of years. He's interviewed over 250 people on all kinds of government conversations and things. He is convinced that it will get to a point soon where you're forced, well, not forced. It's not like, "We're going to fire you, evict you, or whatever." It's, "You want to get on an airplane? You've got to be red, yellow, or green. In order to be red, yellow, or green on your phone, you have to have the contact tracing turned on." "You want to go shopping at Home Depot? You bet, you've got to have that app turned on." It's an interesting dilemma of what's coming, and from all the different people he's talked to, he thinks that's where the economy and the world are headed.
That may be true. What I've heard is that some people are advocating a national testing system: if you get tested for COVID-19 and you want to travel, your COVID-19 test has to be current, they would say within 45 to 48 hours. You take the test and the results download to your phone. You go through and you're able to show your current test results on your phone. That's probably more palatable to people because then it's something they're voluntarily going out and getting tested for, and something they want to be able to prove. It's like showing your driver's license.
As the tests get easier, that may become less invasive than other alternatives.
It becomes a slippery slope. Doesn't it, Chad? If we agree that we're going to take these tests and be current within 48 hours wherever we go, permanently, then who's to say we shouldn't be current within three hours based on this contact tracing app?
Why not have it injected into your head, telling the phone all day long whether you're healthy or not? I do wear a watch; it shows my SpO2, my temperature. I'm already connected to The Matrix. I just don't want to admit it.
I have my Fitbit watch. It tells me when I'm sleeping and everything else that I'm doing.
This is a fabulous conversation. You experienced a traumatic event in your life and were on the front page of the paper. After that flight you talked about landing from, what happened? How does one avoid running into that type of situation?
I use ETHICS as an acronym. Let me run through it quickly. I talk about the personal characteristics and circumstances leading to unethical behavior. This is going to make sense to you quickly. The first one is E for Exaggerated ego. We have seen over and over again how people's egos have gotten them in trouble. I'm sure that every day you run into somebody and think, "That guy's got too big of an ego for any type of safety here." T is Temptation; some people are more susceptible to temptation, and temptation is an opportunity. H is Hijacked by outside pressures, whether it's workplace pressures, alcohol, drugs, financial pressures, sex addictions, any type of addiction, or somebody being on the cusp of financial ruin.
This is one that I clued into. The next one is I for Integrity, especially with management. It is important for companies to have management with high bars of ethics, high integrity. If we take a look at most of the problems, it has been because senior managers signed off on unethical practices. Take a look at Volkswagen or Wells Fargo: those senior managers let it propagate through their corporate cultures. C is Consequences not considered: "I can never see where something that I'm doing is impacting somebody else." As long as we're talking about AI, this is true right here: I, as an executive, cannot gauge where these algorithms that I'm using in my daily business impact someone 6, 60, 600, or 6,000 miles away.
What are those consequences? How can those consequences kick back to me? Those are things that people do not consider. The final one, S, is what I call Stinking thinking or Slippery thinking: "Everyone in my business does this." "I'll never get caught." You see this all the time. I would refer people to my website, which is BusinessEthicsAdvisors.com. I have a video on there about the slippery slope where I go through all of these with examples, in case you didn't catch all six items, and outline what they are. It is important that people have a check on themselves and take personal responsibility for everything they do because ultimately, it kicks back to you. I was convicted of conspiracy.
The bar for conspiracy is so low that all a prosecutor needs is a meeting that you attended, a text that you wrote, or a document that you wrote to prove that you advanced that conspiracy in some way. They can then convict you of the entire conspiracy. In my case, that conspiracy was $100 million. They had threatened to put me in prison for 20 to 30 years if I didn't plead guilty because they were bringing the whole conspiracy down on top of me. My part was almost zero, but my name was all over certain documents and the government was alleging that I was advancing this conspiracy. I was being dumped on for the entire conspiracy, and most white-collar people do not realize how low that bar is and how successfully people are being convicted just for attending meetings.
I think of moving the ball down the field. In this case, you're not even the running back who got the ball and moved an inch. You were standing on the sidelines, you were on the team, and you watched the play. Therefore, you're the guy. Or you were in a meeting, and maybe you didn't even see the play, but you're part of it. That's a conspiracy.
On the bank fraud, I never met any of the bankers who were involved. I never had anything to do with those banks. I had nothing to do with the so-called victims, who were the investors in this thing. I had never met any of them. I was still convicted. I spent more time in prison than some of the people who had direct contact with those folks.
This makes me think that there should be a new app for executives and companies that walks through all of the items: the exaggerated ego, the temptation, the hijacking by outside pressures, integrity, consequences not considered, stinking thinking. Like the sexual harassment training you go through, there's got to be something that says, "We'll take you through this set of 100 questions and tell you how at risk your company is, or you personally are, inside of a company." Is there something like that out there?
One thing I'm proud of is that I do ethical leadership mentoring for people in corporations. These are generally next-gen emerging leaders. That gives companies an opportunity to have those leaders ingrained with those types of decision-making skills, because it is a skill. The other thing I'm proud of is that I have weekly videos, only 1 to 2 minutes long, that companies can subscribe to and disseminate to all their employees, where I talk about the topics of the day. Ethics is one of those things that people need to keep top of mind. There are tools out there, and it's not just me; there are others providing them too, but I'm providing them as well.
It is available out there, and your company should be able to do it. The primary thing is that people need to have ethics almost as second nature. People need to be able to think, "Joe Blow mentioned this thing about artificial intelligence where we might possibly be violating somebody's privacy rights," or, "Somebody mentioned that we have a new update available for our software system and we haven't applied it; we could easily be hacked." You have an ethical obligation to keep that customer data private and to protect your company's systems. If you're not doing all you can to do that, then those are ethical violations in and of themselves.
Back to the beginning of the conversation: there's a Chief Information Officer. There needs to be a Chief Ethics Officer or a VP of Ethics.
Ethics needs to be part of your corporate culture.
There typically is a Chief Ethics Officer at major companies now. If not, it's being taken on by the Chief Compliance Officer, and if it's not the Chief Compliance Officer, it's the General Counsel. If you don't have a General Counsel, Chief Ethics Officer, or Chief Compliance Officer, I don't know how you're able to stay in business. The SEC is going to be down on you as fast as you can possibly imagine. Your board needs to be demanding it.
Where's the cut line for when a company should take this seriously, write a check, and make these kinds of investments? Is there a certain company size? Is it once you're funded? Where do you draw that line in the sand, or is there one?
I think that if you're large enough to have a board of directors, somebody should be in charge of ethics and compliance. That may be the General Counsel. I know that compliance is often viewed as a cost center, but when you're looking down the barrel of a gun at hundreds of millions of dollars in legal fees, fines, felonies, and maybe the shutdown of the company, it is a small price to pay. At the very least, send your CEOs, Chief Technology Officers, and Chief Information Officers to one of my courses and we'll be more than happy to run you through it. You need to educate people within a company about ethics. It needs to become part of your corporate culture. There's no doubt about it.
My website is BusinessEthicsAdvisors.com. There's plenty of information there on how to contact me. There's a bunch of videos there as well. I have a whole suite of offerings, and I call it a process: the ethics full kit formula, where we get everybody talking in the same language. I generally do that through keynotes, talking to all your people live. We have online courses. We have the mentoring, and I am there for those chief ethics officers and chief compliance officers.
In my advanced programs, the master-access-type program, I'm there. You have my cell phone; you can call me anytime, day or night, and ask me any questions you want. We have different levels of membership with our program, and it's worth it. I would refer your people to my website. You can find me on LinkedIn; search for J. Kevin Foster and you'll find me right away. My email is Kevin@BusinessEthicsAdvisors.com. Go to my website and make an appointment with me. I'd be more than happy to talk to you about any type of situation that you have.
We asked: are high ethics possible with AI? After this conversation, I'm comfortable saying that AI is run by humans. If humans can have high levels of ethics, which they can, then the AI can. Who's the man or woman behind the curtain is what matters, I think.
I think that is true. It gets back to any type of ethics and compliance system; that's got to work hand in hand with AI to make sure that it's fair, transparent, and robust, and that you're not taking the prejudices of the people who are writing it and writing them into the code. The last thing you want is your AI discriminating against other individuals, especially now that diversity and inclusion has become a big buzzword within the industry.
Even reading a resume. Let's say an entry-level person right out of college doesn't think they have any bias, but they read the name of the person, go, "I've had some experience," and then move it to a different pile. An AI could read the resume, score it, and not take into account the name of the person. I brought this up with a diversity person out of Philadelphia; he attracts companies into Philly for trade shows and events, and they do a podcast on diversity. I brought that story up and he said, "What if the word eight was used in the text of the resume?" It threw it out because of that. It's such a nuanced question that I don't know what the right answer is there, to be honest with you.
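The name-blind screening idea described here can be sketched in a few lines. This is not any real hiring system; the fields, the redaction step, and the toy keyword scorer are all invented for illustration. The point is simply that the scorer never sees the name.

```python
# Illustrative "name-blind" screening sketch: redact identifying fields
# before scoring, so the score cannot depend on the candidate's name.
# The resume structure and keyword scorer are hypothetical examples.

def redact(resume):
    """Return a copy of the resume with the name removed."""
    blind = dict(resume)
    blind["name"] = "[REDACTED]"
    return blind

def score(resume, keywords):
    """Toy scorer: count how many desired keywords appear in the text."""
    text = resume["text"].lower()
    return sum(1 for kw in keywords if kw in text)

resume = {"name": "Jane Doe", "text": "Python, data analysis, SQL"}
blind = redact(resume)
print(blind["name"])                     # prints "[REDACTED]"
print(score(blind, ["python", "sql"]))   # prints 2
```

As the anecdote about the stray word in a resume suggests, redacting the name removes only one bias channel; the scoring criteria themselves can still encode bias, which is the nuance being discussed.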
I could see where, if somebody is trying to rank someone's education by schools, and historically Black colleges were at the bottom of that ranking for whatever reason (I'm not saying that they are at all), that's discriminatory.
It's important to have people like you who see this. We need to have more technical folks too who understand the code of the algorithm, to make sure the algorithm is not being written improperly. I appreciate having you on the show. Thank you for your time and insights. I will be checking out the website and I look forward to pushing this out to our network. Thank you. See you on the next one.