Algorithms and AI affect so many aspects of everyday life. Although AI can deliver extraordinary value and insight, it can also hurt people. In this episode of Working Better, we dive deep into the issue of bias in AI to illuminate the stories, consequences, and complicated ethical questions that have experts pushing for progress.

Featuring:

  • Kyle Hundman – Data Science Manager, American Family Insurance
  • Deena McKay – Functional Delivery Consultant, Kin + Carta; Founder and Host of Black Tech Unplugged
  • Maxwell Young – UX Designer, Kin + Carta
  • Nicolas Kayser-Bril – Journalist, AlgorithmWatch

Show Notes

“Alexa, play Whitney Houston as loud as possible.”

Voice assistants are a great way to demonstrate how an algorithm works. In its simplest form, an algorithm is just a sequence of steps designed to accomplish a task.

Alexa uses a voice recognition algorithm to understand that I want music, that I want that music to be Whitney Houston’s music, and that I want it played at maximum volume. It moves through a carefully designed sequence of rules and arrives at “I Wanna Dance With Somebody” playing loud enough to wake up my neighbors. As requested.

So let’s say, hypothetically, that’s how I start every Friday morning. Except, this Friday, I just say, “Alexa, play some music.” Alexa will then be more likely to play Whitney Houston, or something like it because it’s learned my preferences and can now better predict what I want to hear.
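To make “a sequence of steps” concrete, here’s a minimal, hypothetical sketch in Python of both ideas at once: a hard-coded chain of rules (the algorithm) plus a simple learned preference (the machine learning part). The function and the fallback logic are invented for illustration–this is not how Alexa actually works under the hood.

```python
# A toy "voice assistant": fixed rules plus a learned preference.
from collections import Counter

play_history = Counter()  # how often each artist has been requested

def handle_request(utterance: str) -> str:
    """Walk a hard-coded sequence of rules to decide what to play."""
    if "whitney houston" in utterance.lower():
        artist = "Whitney Houston"
    elif play_history:
        # No artist named: predict from learned preferences.
        artist = play_history.most_common(1)[0][0]
    else:
        artist = "a random popular artist"
    play_history[artist] += 1  # "learn" a little from every request
    volume = "max" if "loud" in utterance.lower() else "normal"
    return f"Playing {artist} at {volume} volume"

print(handle_request("Alexa, play Whitney Houston as loud as possible"))
print(handle_request("Alexa, play some music"))  # falls back to the favorite
```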

(00:48) The Pervasiveness of Algorithms

That’s just one example of how algorithms, machine learning, and AI are used in everyday life. It’s also a fairly harmless example, which is not always the case. Algorithms are used to predict the things you might buy, the fastest route to the grocery store, your qualifications for a job, how likely you are to pay back a loan, which Pokemon you are based on your grocery list, and more.

But, whatever their purpose, algorithms all have one thing in common: they’re designed by people.

In case you haven’t caught a headline for the last 5,000 years or so, people are far from perfect. So when the stuff that goes into these algorithms is designed by humans and modeled after human behavior, the output can be just as flawed. Bias in the form of racism, sexism, and other discrimination becomes solidified in code, embedded into everyday products, and affects people’s lives in very real ways.

So today we’re going to shed light on the dangers of bias in AI, why it’s so hard to fix, and what we can do to overcome it and help create more representative, equitable, and accountable AI.

(02:10) The Building Blocks of AI

First a little algorithm and AI 101. 

Let’s say an algorithm is a building. Data points and lines of code are like brick, mortar, and concrete–raw material used in different ways for different purposes. Some become apartment buildings. Some become museums. And, thankfully, some become Wendy’s restaurants. 

Artificial intelligence, then, is sort of like a city–a collection of different buildings, all designed to interact, depend on, and benefit from one another. Today–we’re going to talk a lot about algorithms, the buildings designed by people, which can accomplish extraordinary things–but can also cause harm in all sorts of ways. 

Deena: “When you would go wash your hands and you put your hand under the sink, would it work automatically?”

Maxx: Yes, typically, yeah.

Deena McKay, a delivery consultant here at Kin + Carta, was talking with our producer Maxx (who is white). Maxx thought Deena might just be checking up on his COVID hygiene, but she was actually illustrating just how widespread this issue is, even with a fairly low-tech example:

Deena: “So, me being a person of color, it doesn’t work automatically. Sometimes I have to move my hand around. Or sometimes I have to maybe even go to an entirely different sink because of the way that these things were created. Was it with a diverse thought? And sometimes people who are Brown/Black minorities, our hands don’t automatically get recognized, even just for washing our hands, which is crazy because we obviously need to wash our hands.”

Yes, we do. And with that type of fundamental failure, it doesn’t take much to imagine how it could lead to much more severe consequences. As Deena explained, “If you have that concept of we can barely wash our hands, imagine what would happen if it was a self-driving car, and it didn’t recognize me walking across the street. It’s going to hit me.”

Deena is also the host of another podcast that we highly encourage you to check out called Black Tech Unplugged. It is an amazing podcast where Deena talks with other Black people currently working in tech to share their stories about how they got started and encourage other people of color to work in the tech industry. 

(04:12) Joy Buolamwini – The Coded Gaze

If you’ve heard anything recently about racial bias in AI, you may have heard about the remarkable work of Joy Buolamwini. In her own words, Joy is a poet of code who uses art and research to illuminate the social implications of artificial intelligence. Joy was working at the MIT Media Lab when she made a startling discovery. Joy explains, via a talk at the 2019 World Economic Forum: 

“I was working on a project that used computer vision. It didn’t work on my face, until I did something: I pulled out a white mask, and then I was detected.”

In the talk, Joy shows a video of herself sitting in front of a computer vision system. In this system, white male faces are recognized immediately, but when she sits down, nothing–until she puts on an expressionless, seemingly plastic white mask. Joy set out to determine why this was happening, to uncover the biases within widely used facial recognition systems, and help build solutions to correct the issue. 

Joy’s story is the subject of a new documentary called Coded Bias, which premiered at the Sundance Film Festival earlier this year. Joy is also the founder of the Algorithmic Justice League, an organization aiming to illuminate the social implications and dangers of artificial intelligence. As Joy says, if Black faces are harder for AI to detect accurately, it means there’s a much higher chance they’ll be misidentified.

(05:32) Wrongfully Accused

Take the story of Robert Williams, a man from Detroit wrongfully arrested at his home for a crime he didn’t commit. In a piece produced by the ACLU, Robert describes his conversation with police after he was first detained.

“The detective turns over a picture and says, ‘That’s not you?’ I look, and I say, ‘No, that’s not me.’ He turns another paper over and says, ‘I guess that’s not you either.’ I pick that paper up and hold it next to my face, and I say, ‘That’s not me. I hope you don’t think all Black people look alike.’ And he says, ‘The computer says it’s you.’”

It wasn’t. 

Although companies including Amazon and IBM have announced they are halting the development of facial recognition programs for police use, Robert’s story is, unfortunately, becoming all too common. 

However, the dangers of bias in AI aren’t always so easily seen and demonstrated. They’re not always as tangible as a computer seeing a white face, but not a Black face, or a soap dispenser recognizing white hands more than Black hands. 

One study found that a language processing algorithm was more likely to rate white names as “more pleasant” than Black names. 

In 2016, an algorithm judged a virtual beauty contest with thousands of entrants from around the world–and almost exclusively chose white finalists.

There are well-documented cases in healthcare, financial services, and the justice system–the list goes on.

(06:58) How does bias in AI happen?

So how do these things happen? 

The most obvious place to start is with the data being fed into an algorithm. 

(07:05) Bad Data

For image recognition models–the algorithms used in things like soap dispensers or facial recognition software–if the model is trained mostly on white faces or white hands, it’s going to learn to recognize white skin more easily. Because many of these systems were trained on such a disproportionate sample of white men, Joy gave the phenomenon a name:

“I ran into a problem, a problem I call the pale male data issue. So, in machine learning, which includes techniques being used for computer vision–hence finding the pattern of the face–data is destiny. And right now if we look at many of the training sets or even the benchmarks by which we judge progress, we find that there’s an over-representation of men with 75 percent male for this National Benchmark from the US government, and 80 percent lighter-skinned individuals. So pale male data sets are destined to fail the rest of the world, which is why we have to be intentional about being inclusive.” – Joy Buolamwini 

In 2015, Amazon experienced a similar situation. Recruiters at Amazon had built an experimental AI model to help streamline the company’s search for top talent. The tool took thousands of candidates’ resumes and would quickly identify top prospects, saving hiring managers countless hours. Even though the algorithm was designed to weigh gender neutrally, Amazon found it was heavily favoring men.

Why? The benchmark for top talent was developed by observing patterns in resumes Amazon had received over the previous 10 years, which belonged to, you guessed it, mostly men. The system learned to penalize resumes containing words like “women’s” as in “women’s college” or “women’s debate team” because they weren’t phrases likely to show up in previous applicants’ resumes. 
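You can reproduce the shape of that failure in a few lines. The sketch below trains a tiny text classifier on made-up resumes whose historical “hired” labels skew male–this is entirely synthetic, not Amazon’s system–and then inspects what the model learned. Gender is never an input, yet a token like “womens” picks up a negative weight simply because it correlates with past rejections.

```python
# How skewed hiring history teaches a model to penalize a word -- toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain chess club, software engineering intern",
    "software engineering intern, hackathon winner",
    "captain womens debate team, software engineering intern",
    "womens college graduate, hackathon winner",
]
hired = [1, 1, 0, 0]  # past decisions reflect a mostly-male hiring history

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights["womens"])  # negative: the model learned to penalize the token
```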

(08:40) Diversity of Perspective

It really comes down to the fact that you need more multidisciplinary people making these decisions. “Twitter was invented by a bunch of white guys at a table, and they never thought of any problems that wouldn’t affect them as white guys.” – Max Young

That’s Max Young, a designer on the Kin + Carta UX team. Max says that often the simplest place to start is by looking at who is in the room. Deena agrees: “I would always like to see more people who look like me, in the workplace, doing tech work.”


(09:39) Reinforcing Systemic Bias

There are also cases where algorithms that overlook broader systemic issues–like gender and racial inequality–can actually continue to reinforce them. To help explore this idea, we sat down with Kyle Hundman. Kyle leads a team at the Data Science and Analytics Lab at American Family Insurance.

“If your algorithm is a mirror of humanity, you failed and your algorithm is biased.” – Kyle Hundman, Data Science Manager, American Family Insurance

It really is the simplest way to understand it. AI isn’t really artificial intelligence. At Kin + Carta, we often prefer to think of it as augmented intelligence, because it’s not a computer thinking on its own. It’s a computer thinking the way we think, and behaving as we behave, which means it needs to be examined very carefully.

Take the story of COMPAS, an algorithm developed to evaluate the likelihood that a criminal will commit a crime again. A 2016 ProPublica study analyzed 10,000 defendants scored by the COMPAS system, and the findings were clear: among defendants who did not commit a crime over a two-year period, Black defendants were nearly twice as likely as their white counterparts to be classified as higher risk. The system had effectively learned to disproportionately flag Black defendants because it was mimicking the bias that we know exists in arrest records.
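The kind of check ProPublica ran is easy to sketch. With made-up numbers (not their actual dataset), you can compare false positive rates–defendants labeled “high risk” who did not go on to reoffend–across groups:

```python
# Comparing false positive rates by group -- illustrative data only.
import pandas as pd

df = pd.DataFrame({
    "group":      ["Black"] * 4 + ["white"] * 4,
    "high_risk":  [1, 1, 0, 1,    0, 1, 0, 0],  # the model's label
    "reoffended": [0, 1, 0, 0,    0, 1, 0, 0],  # what actually happened
})

# False positive rate per group: P(high_risk = 1 | did not reoffend)
did_not_reoffend = df[df["reoffended"] == 0]
fpr = did_not_reoffend.groupby("group")["high_risk"].mean()
print(fpr)  # a gap between groups is exactly the disparity ProPublica found
```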

It’s also one of the reasons some are calling for an overhaul of credit reports as we know them in the US. The short of it is that, beginning in the 1930s, neighborhoods in many American cities were subject to “redlining” policies, which allowed mortgage lenders to label predominantly Black neighborhoods as “high-risk” areas, effectively denying Black residents access to credit for years. Even decades after those practices were outlawed, advocates point out that even the simplest of data points can still lead to a disproportionate impact. Kyle helped illustrate one such example, as well as how important, yet entangled, the conversation can be:

Kyle: “Just because of the use of location and anything that you’re doing that’s consumer facing, because you have all of these historical factors of discrimination and injustice in our country, and those often date back hundreds of years, and still manifest themselves today, it’s a really tricky question to ask, well, can location be a proxy for some of these historical injustices? How much is that still present today? How much does that matter in what we’re doing right now? And then how much of that is actually perpetuating some of those injustices? And that’s where the conversation gets really tricky and really deep.”

(12:14) Understanding the Bigger Picture

There’s clearly no easy solution, but one thing seems clear: the broader social context can’t be ignored when algorithms are making decisions about things like hiring, access to loans, or criminal sentencing.

Focusing on really narrow data sets and ignoring the backdrop of racial and gender inequality makes as much sense as summarizing 2020 by saying “Traffic jams were at an all-time low.” Whether it’s true or not, you’re very much missing the bigger picture.

Which raises a question about education: for anyone in the tech world–designers, developers, data scientists–should AI skills and social understanding be considered inseparable? Like Laverne and Shirley? Bacon and eggs? Or being from Minnesota and saying “you betcha”?

Kyle Hundman: “I think it should be. And I think that it’s now more culturally relevant than it’s ever been before, and it’s getting a lot of attention, rightfully so.”

Max Young: “When you get a bunch of engineers together and you say, ‘Come up with a system to figure out credit scores,’ maybe it’d be good to have a historian in there to say, ‘We’ve actually come across this problem before; let’s try to fix it rather than just maintain the situation.’”

Responsibility and Action

Kyle says one of the most powerful examples of multi-disciplinary teams could be in how companies are addressing diversity and inclusion.

Kyle: “We’ve seen recently diversity and inclusion departments pop up in corporations. I think those will become technical, and I think you’ll have bias audits where you have technical people, that this is their focus, and they want to make sure that corporations are being responsible.”

We also asked Kyle about the responsibility of folks like him to uncover and uproot issues of algorithmic bias. He said that in many ways, it’s about better data science, and more accurate models, period.

Kyle Hundman: “I think it’s healthy to look at it as due diligence, and it should be core to any modeling exercise. I think there are a lot of situations where that’s actually beneficial to model development and where bias might actually hurt performance: if you oversample and you have one class that’s over-represented, that’s a fundamental flaw in your data and you need to fix that. You want to fix that issue no matter what your task is or what your data looks like. I think, in a lot of situations, there’s empirical evidence that fixing some of these bias issues actually improves your model and actually improves your accuracy.”
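Here’s a minimal sketch of the kind of fix Kyle is describing: spotting an over-represented class and rebalancing the training data before fitting anything, using scikit-learn’s resample utility. The data is random noise, purely for illustration.

```python
# Rebalancing an over-represented class by oversampling the minority.
import numpy as np
from sklearn.utils import resample

X = np.random.rand(100, 5)
y = np.array([0] * 90 + [1] * 10)  # class 1 is badly under-represented

# Oversample the minority class until the two classes match.
X_up, y_up = resample(X[y == 1], y[y == 1],
                      replace=True, n_samples=90, random_state=0)

X_balanced = np.vstack([X[y == 0], X_up])
y_balanced = np.concatenate([y[y == 0], y_up])
print(np.bincount(y_balanced))  # [90 90] -- both classes now weigh equally
```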

So with a system like COMPAS, how do we “fix it”?

We can’t really say, because COMPAS is a proprietary algorithm owned by its creators. However, this brings us to another key issue here: transparency.

(14:57) The “Black Box” Problem

“The Black Box”–and no, not the thing on a plane that holds all its juicy plane secrets. Fun fact–did you know that “black boxes” on planes are actually not black at all? They’re bright orange so they can be found more easily in the event of a crash.

In the case of AI, the black box isn’t something physical either, but it’s perhaps more “black” in its lack of visibility. We asked Kyle to help explain what the black box issue with deep learning is really all about.

Kyle Hundman: “Because the combinations are endless, you can’t really pinpoint how a single input moves through that network and interacts with all of these other features and lights up neurons partially or fully. There’s just so much depth and so much interaction throughout this whole thing. You can’t peel that apart.”

When we can’t peel it apart, how do we know how an algorithm is coming to an answer? And how do we know it’s being unbiased in arriving at that answer? In response to calls for more transparency, big tech firms have released a variety of “toolkits” to help give a window into how AI systems work.

Earlier this year, Microsoft released its new “Fairlearn” toolkit for its machine learning platform on Azure, allowing anyone using the platform to test for and hopefully prevent incidents of bias. LinkedIn released its Fairness Toolkit, used to govern how AI recommends job listings, shared content, or potential job candidates. This type of transparency is at least a step in the right direction, right?
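To give a feel for what these toolkits actually do, here’s a short sketch using Fairlearn’s MetricFrame to slice a model’s accuracy by a sensitive feature. The labels, predictions, and groups below are synthetic stand-ins; a real audit would run multiple metrics over real validation data.

```python
# Slicing model accuracy by a sensitive feature with Fairlearn.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]             # what actually happened
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]             # what the model predicted
sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]

audit = MetricFrame(metrics=accuracy_score,
                    y_true=y_true, y_pred=y_pred,
                    sensitive_features=sex)

print(audit.overall)   # accuracy across everyone
print(audit.by_group)  # accuracy per group -- a large gap is a red flag
```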

(16:31) Who can hold companies accountable?

That’s what we asked Nicolas Kayser-Bril from AlgorithmWatch, a non-profit organization based in Berlin, Germany, that’s focused on research and advocacy about algorithms and their impact on society. Nicolas pointed out that transparency is important, but really only part of the equation:

Nicolas Kayser-Bril: “It’s of course, very important to look under the hood, but I wouldn’t say that transparency is the most important issue. The most important issue is enforcement. The problem is that we know there is a problem; we know which companies are the problem. I mean, when I as a journalist called the enforcement organizations, they’re like, ‘Oh, thank you very much we might look into it in five years.’ Because they have no funding, no expertise, and no political support to simply enforce the law. And no business in their right mind will ever be transparent to the point that they admit to breaking the law. This will never happen.”

So should algorithms be better regulated? Should the public and the government treat data and artificial intelligence like any other potentially dangerous commodity? Nicolas says the way we look at food service can be a helpful comparison. “When you go to the restaurant, you don’t ask to go to the kitchen in the name of transparency to look for yourself which bacteria are living there. You trust that the government sends hygiene inspectors to do it on your behalf.”


(18:01) Rising to the Occasion

Doctors are another good example of a group that can do great good or great harm. What if we looked at medicine as an example of how to regulate AI and ensure that it meets ethical standards? Doctors are regulated privately by medical boards and publicly by state licensing agencies. In the same way, we need an industry group to set the standards for what tests AI should be subject to in order to validate its fairness. Those tools, like the Fairness Toolkit, would be open source. State or federal law could then mandate that the AI has to pass those tests. Ideally, the AI algorithm itself would be open source, but, until we can get companies to give up their intellectual property, passing a consistent set of black box tests would be better than nothing. Even now, you can work with the Algorithmic Justice League and request an algorithmic audit, much in the same way we currently work with security firms to do a security audit.

The debate about regulating AI and algorithms will undoubtedly continue. The ethical questions are complicated, and, at least in the short run, it looks as though the responsibility will fall to the builders–the makers and practitioners creating these systems–to be really deliberate in how we understand the impact of algorithmic bias, better hold ourselves accountable, and ultimately prove that AI can actually improve upon the human mind, rather than just imitate it.

Because remember what Kyle told us: 

“If your algorithm is a mirror of humanity, you failed and your algorithm is biased.”

Speaking of groundbreaking feats of human achievement – it’s about that time. That’s right, folks – it’s Cooler Terms with Pooler and Hermes.

(19:34) Cooler Terms with Pooler and Hermes

Scott: Joining me as always is Katie Pooler and Katie, I just realized that I introduce myself every episode but you never have.

Katie: Truthfully, I needed a few episodes before I felt comfortable enough to formally attach my name and identity to the podcast. 

Scott: OK, but hardly anyone listens to this podcast, so I think you’ll be OK introducing yourself.

Katie: I am Katie Pooler, and, in addition to being our CFO – Chief Fun Officer – I also work in Connective Digital Services here at Kin + Carta. In fact, Connective Digital Services is the official Cooler Term for IT. We do a lot of things, but, essentially, I’m a solutions or systems engineer for the operations side of Kin + Carta. People come to me with problems, and it’s my job to find solutions. In fact, I think this is why you asked me for help with the podcast.

Scott: Thanks for not starting with ‘For those of you who don’t know me’ when introducing yourself. What is that about? Isn’t that what all introductions are for? For people who don’t know you?

Katie: For those of you who don’t know me, I am our president and CFO, I climbed Kilimanjaro, and I have immaculate credit and perfect work attendance. For those of you who do know me, don’t tell them I’m full of shit.

Scott: For those of you who don’t know me, how dare you? How dare you. I’m clearly someone you should already know.

Katie: You know who does know me Scott? The algorithm. It knows what I want, what I need, I assume it knows everything about me. So where is my algorithm-inspired soul mate? It’s 2020, we’re stuck inside, and we hate being on Zoom all day. I live alone. I have been so close to purchasing cardboard cutouts of celebrities just to add some variety to my social life. The algorithm knows what I want before I want it, why not use those powers for good? 

Scott: What would you like to be able to do with it?

Katie: I wish we were able to use our unconscious biases more effectively–not to discriminate based on race, gender, or ability. What am I talking about? I’m talking about using algorithms to determine whether a person is likely to microwave fish in the break room, take off their shoes on an airplane, or watch The Big Bang Theory.

Scott: I would pay cash money for that service. 

Katie: Seriously though, that show is awful. 

Scott: I don’t need an unbiased algorithm to tell me that.

Scott: Thanks for tuning in. Let us know what you think of the podcast and if you have any ideas for future episodes. Reach out to us on Twitter, Facebook, LinkedIn, and Instagram or just dream it to us on the astral plane. We are everywhere.

What if we told you the tiny European nation of Estonia has been voting online since 2005? In this episode, we take a closer look at the way the Estonian system works, the biggest obstacles to online voting in the US, and what it might take to transform one of the most important functions of our society.

Featuring:

  • Florian Marcus – Digital Transformation Advisor, e-Estonia Briefing Center
  • Ashby Fiser – CEO of Aviame
  • Mark Ardito – VP of Cloud Modernization, Kin + Carta

Key Takeaways:

  • The Estonian voting system is just part of the country’s commitment to being the “Most Digitally Advanced Society in the World.”
  • Estonia’s digital identity system lies at the heart of its digital capabilities.
  • Ballot security, infrastructure, and ensuring voter confidence remain the three biggest hurdles to the United States in terms of building a reliable and secure voting system.
  • Most experts still believe that paper ballots should be the gold standard for traceable, auditable voting.
  • Collaboration, experimentation, and long-term perspectives are key to creating meaningful change to election technology.

Show Notes 

(00:40) Why Can’t We Vote Online?

We’ve grown accustomed to sharing vast amounts of information digitally. Many of the transactions we conduct online every day would have terrified us just a handful of years ago: banking from your phone, applying for loans, managing credit cards, applying for jobs, renting out your home to strangers, and filing taxes.

So why can’t our voting systems work in the same way?

If this question has ever run through your head, you’re not alone. No surprise, there’s a lot to it. Many people will say it’s next to impossible–at least in the US–in the near future. There are many ways to look at it, and in just about every conversation about online voting, eventually, one country comes up: Estonia.

(01:24) Examining Estonia

With a population the size of Philadelphia, Estonia is known for its vast wilderness, black rye bread, having absolutely no one famous born there (go look up “Famous Estonians” and you will see what I mean. No one you have ever heard of. No offense, Estonia, but you need to pick up your PR game), and the option for every citizen to vote online since 2005.

Today, we’re going to talk about how Estonia built its current system as well as the most significant obstacles preventing the United States from doing something similar. 

We’ll also discuss whether the real question should be “Should we want to vote online?,” rather than “Why can’t we vote online?”

(02:06) Estonia’s Digital Society

Bordering Latvia to the south, Russia to the east, and the Baltic Sea to the north and west, Estonia has become known as “The Most Digitally Advanced Society in the World.” In fact, 99 percent of all public services are available online: driver’s license applications, obtaining permits, paying taxes, opening a business, and yes–voting–all happen through one digital tool.

Since the voting system was first introduced in 2005, the country’s acceptance of it has only grown stronger. In fact, it has flourished. No major vote recounts, no hacking scandals, and in the most recent election, 46.7 percent of all votes were cast online, bringing down the cost per vote by an estimated 50 percent. 

Is it a glimpse into the future of how governments will operate? Is it something that only works on a small scale? Or are the threats so extraordinary, so uncertain, and so potentially catastrophic that it should be avoided like the plague?

(03:04) Keeping Ballots Secure

Ballot security is far and away the number one issue plaguing the voting process. Keeping who you voted for a secret so that no one can coerce you into voting for a candidate is imperative because secrecy keeps the coercer from knowing if you were compliant. 

The anonymity of voting is also one of the simplest ways to understand the differences between things like financial transactions and voting. Fraud protection systems that help make online banking and tax filing possible depend specifically on linking your activity to your identity. However, in voting, that connection is completely severed, so the technical challenge is turned on its head. There’s also the question of motivation, and the differences between the government wanting your money and wanting you to vote, but we’ll put those questions aside for now. 

(03:49) How Does Estonia Protect Anonymity?

So, as we play our game of “Keeping up with the Estonians,” we wonder: how do they keep every ballot a secret? It helps to look at the system as a whole. According to Anna Piperal, “The central idea behind this development is transformation of the state role and digitalization of trust. Think about it. In most countries, people don’t trust their governments. And the governments don’t trust them back. And all the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated.” (Anna Piperal, TED talk, 2:46)

(04:37) The Digital Identity System

When Estonian officials talk about their digital society, they describe three design principles that have guided it since its early development in the 90s. The first is to guarantee privacy and confidentiality. At the center of the technology is the digital identity system, and a digital ID card. Every citizen is issued a digital identity that must be verified before any services can be accessed. We spoke with Florian Marcus, a digital transformation advisor at the e-Estonia Briefing Center, who showed us just how simple it is to vote in Estonia.

If you’re like me, you’re thinking, “End of story? But I have so many questions.”

Estonian officials say those two PINs are what prevent someone from being able to vote fraudulently even if they had your digital ID card. But again–what about keeping my vote anonymous? Florian Marcus explained: “Encryption effectively means that you can see in the source code how we encrypt our stuff, but to decrypt it, you don’t need to have found a particular line in the code. Instead, you would need a lot of brute computing power to decrypt that key. And the truth is that the encryption that we use these days would take all the different supercomputers in the world combined several years just to crack one sort of transaction.”

“The actual process of voting takes around 20 seconds. It effectively takes as long as you need to decide who you want to vote for.”

Florian Marcus – Digital Transformation Advisor, e-Estonia Briefing Center

(06:08) The Importance of Transparency

Florian was also quick to point out that the entire system’s source code is available on GitHub, meaning IT nerds like me can take a look and point out its flaws.

Many experts working on this very problem, in the United States, the United Kingdom, and other European countries, seem to agree that this type of encryption is still not enough for a country the size of the United States.

There is a type of encryption called homomorphic encryption that could be the solution. I won’t explain the math, because I don’t understand it, but people who are much better at math than I am discovered a type of encryption that allows you to perform operations on encrypted data without decrypting it, keeping the calculations themselves encrypted. Josh Benaloh at Microsoft has helped develop voting software called Microsoft ElectionGuard that leverages this encryption to count votes. It can produce an accurate tally of the total number of votes without ever seeing who anyone voted for. Wow. Mom was right. Math is cool.
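To see the principle in action, here’s a toy tally using phe, an open-source Python implementation of the Paillier cryptosystem. To be clear, this isn’t necessarily the exact scheme ElectionGuard uses–it’s just the same homomorphic idea: ballots are encrypted, summed while still encrypted, and only the final tally is ever decrypted.

```python
# Homomorphic vote counting in miniature (pip install phe).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Four voters each cast an encrypted ballot: 1 = candidate A, 0 = candidate B.
ballots = [public_key.encrypt(v) for v in (1, 0, 1, 1)]

# The election server adds the ciphertexts together; it never sees a vote.
encrypted_tally = sum(ballots[1:], ballots[0])

# Only the holder of the private key can decrypt -- and only the total.
print(private_key.decrypt(encrypted_tally))  # 3 votes for candidate A
```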

(07:09) Infrastructure and Innovation

The second major issue is infrastructure. In other words, the systems, servers, software, and networks that online voting would depend on. 

The key to the Estonian infrastructure is a data exchange platform called the X-Road. Anna Piperal explained: “Just like a highway, it connects public sector databases and registries, local municipalities and businesses, organizing a real-time, secure, and regulated data exchange, saving an auditable trace after each move.”

(07:51) The “Only Once” Principle

This brings us to the second design principle: Only Once.

Each piece of information is entered only once.

Permits, licenses, leases, contracts, basic medical info–think about how many times you’ve had to re-submit the same piece of information to multiple entities at your local government. I’ve lost track. In the back of my mind, I’m always thinking, “Shouldn’t you have this information already?”

The X-Road data exchange system is just a part of the infrastructure that makes online voting possible, but it illustrates a key point. It’s part of a much more robust approach to digital services. The lack of such an approach in the United States has been underscored by the efforts from several organizations trying to bridge the gap.

(08:50) The United States’ Digital Infrastructure Problem

Organizations like OmniBallot, Voatz, and DemocracyLive have been advocating for online voting, as well as operating systems already in place that allow members of the military and certain citizens overseas to vote online. However, several studies, including ones from the University of Michigan and MIT, have cautioned that the existing online voting systems are rife with vulnerabilities and security issues.

It seems that what’s been built so far, although well intentioned, isn’t supported by a strong enough foundation. It’s an attempt to build a service that, in order to work properly, depends on a bigger system that simply doesn’t exist.

It’s a bit like debating where to hang artwork in your home, finally agreeing that, yes, it does tie the room together above the fireplace, when you suddenly remember you don’t actually have any walls. Or floors. And you’re really just standing in an open field with a giant “Live Laugh Love” poster and nowhere to put it.

To make it even more difficult, in the United States, each state, along with the District of Columbia, is independently responsible for voting, and the actual mechanics of voting occur at the county level. Try getting all of those parties to agree on a single shared infrastructure for voting when we can’t even agree on how to pronounce pecan (pee-KAHN), caramel (car-a-mel), or crayon (crayn).

ANYWAY–foundations matter. Infrastructure matters. Okay, moving on to the third major issue at hand: voter confidence.

(10:27) Ensuring Voter Confidence

Estonian leaders put trust at the forefront of their entire system, voting very much included.

The third principle at the heart of the Estonian System? Only YOU have access to the data.

In terms of the voting process, this idea of ownership takes shape in a few ways. First is the structure of elections themselves. Elections are scheduled for ten days. The first seven days are digital only. You can change your vote as many times as you want, and only the last vote is counted.

The issue of coercion comes up a lot here, and Florian said they’re often asked, “What’s to prevent someone from breaking into my home and forcing me to vote for a particular candidate?”

According to Florian, “Yes, somebody could break into my house and force me to vote for a particular candidate, but I’ve got seven more days to change my vote. And obviously that way, it’s very hard to leverage a meaningful part of the population. And even if somebody broke into my house, I don’t know, on the last online voting day at 23:59, just before midnight and would force me to cast my final vote in favor of some other candidates, the paper vote that happens afterwards, overrides any electronic vote. That is still another safeguard that we have for i-Voting.”
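As a thought experiment, the counting rule Florian describes–last online vote wins, and paper overrides electronic–fits in a few lines of Python. This sketch deliberately ignores the encryption and anonymization layers that make the real system trustworthy; the voter IDs and candidates are made up, and it’s only meant to show the safeguard’s logic.

```python
# Toy tally of Estonia-style re-voting rules -- illustration only.
online_votes = [  # (voter_id, timestamp, candidate)
    ("voter1", 1, "A"),  # coerced vote...
    ("voter1", 2, "B"),  # ...quietly replaced the next day
    ("voter2", 1, "A"),
]
paper_votes = {"voter2": "C"}  # a paper ballot cast afterwards

final = {}
for voter, ts, candidate in sorted(online_votes, key=lambda v: v[1]):
    final[voter] = candidate  # later online votes overwrite earlier ones
final.update(paper_votes)     # paper ballots override any electronic vote

print(final)  # {'voter1': 'B', 'voter2': 'C'}
```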

(12:15) Trust, Audits, and Accountability

Maintaining voter confidence and a trustworthy system brings up another critical process: audits.

Experts will often point to paper ballots as the gold standard for an auditable, tangible way to ensure the accuracy of an election. Florian insists Estonia is able to maintain the same kind of auditable trace with online votes that’s possible with paper ballots.

We didn’t get into the weeds about the audit process with Florian, but cybersecurity experts around the world, including MIT’s Ron Rivest, have continued to urge government officials to adopt paper-based risk-limiting audit systems, rather than any online voting.

In 2019, Microsoft announced ElectionGuard, an open-source software development kit designed to help make voting, audits and security more efficient. Microsoft has also been quick to emphasize that the technology is NOT designed to support online voting. 

(12:23) What about the Blockchain?

So where could the United States start?

Estonia has a digital government built from the ground up. And every time a conversation starts about building trust in our voting process, it usually circles back to the question: “What about the blockchain? Isn’t the blockchain designed to solve this problem?”

Although the call for blockchain to solve all of our problems was more popular in 2018, it does seem to make sense in this case: a decentralized public ledger to help ensure the security of information. What we learned is that, yes, blockchain technology plays a critical role in the Estonian system as a whole–but not in the actual voting process itself.


(15:15) Modernizing the Voting System

Again–the foundation is everything. We spoke with Mark Ardito, Kin + Carta’s VP of Cloud Modernization, to get his perspective. Mark has spent his career helping big global businesses break free from old, sluggish technology and move to modern, agile ways of doing things. According to Mark:

“In the United States, we have an enormous amount of technical debt. What that means is we have not invested in computer systems in our government agencies for decades. We see sporadic pockets of investment, but nothing of substance. We have federal systems and then state-run systems. All have varying degrees of digital capabilities. Heck, we had the governor of NJ tweet back in April, amid the outbreak of COVID, that he desperately needed 6 COBOL developers. The IT systems in NJ are over 40 years old and still running COBOL.”

“Talk about a lack of investment. Overall, the United States has overwhelmingly under-invested in these systems, and we are paying the price for it now. We have a massive amount of work to do from an IT standpoint.”

(16:54) Building toward a Digital Identity System

Many say a secure universal ID system would create the foundation we need. Ashby Fiser is a UX expert and technologist working at the intersection of politics and technology. She says it would be a good start, but might work best if it were taken out of the government’s hands.

(17:30) Collaboration is Critical

The relationship between the tech community and the government is critical and complicated–and if you’ve ever seen clips of tech leaders like Mark Zuckerberg and Jeff Bezos explaining the internet to Senate committees, you know there’s ground to be covered.

Going back to the original question: why can’t we vote online? Some would say that we shouldn’t vote online. That the danger is too high.

Others argue it’s actually the best way to fight those threats…but that we simply aren’t trying to build a robust system that could actually support it and revolutionize the way we vote. Are we being shortsighted?

(18:26) The Long Term View

Ashby thinks perspective is everything:
 
“I think a lot of people have a really short-term viewpoint of things and you really have to look. In politics, one of the first things … I met with a guy who had been Obama’s CTO when I first started in this field. One of the first things he told me is you’re not going to get something done in a year. You’re not going to get something done in two years. You’re going to get one major thing done in your political career.

“I sat with that for a really long time, and one of the major things I want to do in my political career, whatever that looks like, is to fix voting. If it takes me … I’m 40 right now. If it takes me till I’m 70 to have a universal voting system, I am going to be okay with that. That’s just not a perspective a lot of technologists are willing to take.”

There are many reasons why so many of us need a vacation right now more than ever. COVID-19 has changed how we work, and it introduced a new term that we all became very familiar with: “Zoom fatigue.”

For those lucky enough to work remotely during the pandemic, the pure exhaustion of “Zoom fatigue” is all too familiar by now: seven consecutive video meetings, followed by a “virtual happy hour” with friends, and then an hour with family as they try to figure out how not to talk over each other on FaceTime.

The CEO and founder of Zoom famously rejected most in-person meetings, explaining his rationale like this: “Why would I leave my office, or leave the country, to do an hour meeting when I can do the same meeting over a Zoom?”

But that’s come at a cost to our mental health, because now we are all on Zoom, all day, every day.

Here is a summary of the episode; you’ll find the full transcript below if you prefer browsing while listening.

Mental Health and Virtual Connections

  • Humans are social creatures and communicators at heart. Research shows just how much we communicate nonverbally: body language, facial expressions, eye contact–little tiny things that we’re able to see and decode, consciously or unconsciously.
  • Put everyone into little postage stamps on your screen, and those cues are mostly gone. But your brain is still trying to seek out that same information, so it goes into gathering mode. It quickly becomes overstimulated trying to pay attention to so many different things that it doesn’t really focus on any one thing particularly well.
  • All of this leads to one thing, which many of us are experiencing right now: exhaustion. Not the same “I’ve worked too much this week” exhaustion, either. It’s the “I’m not getting enough done, and I’m not having the right social interactions to actually enjoy my week and feel like I’m making progress” kind.
  • According to a recent NY Times article, dentists are seeing more and more appointments from patients who have cracked their teeth from grinding them together. They are calling it an “epidemic of broken teeth.”

How Tech Can Help

  • Can technology also create the “randomness of relationship building” that we experience when we’re in the office?
  • How can you recreate those spontaneous interactions that make your office your office?
  • Employee recognition can stimulate conversations and a sense of belonging in a remote workforce.
  • Kin + Carta experimented with using blockchain technology to create a “digital high-five” system. Within a year, it had 25,000 transactions.

Show Notes 

(02:41) How Is Social Distancing Affecting Our Mental Health? 

In this episode, we discuss how therapists and business leaders have approached this challenge. Of connecting with people. Of substituting genuine human interactions. Of practicing empathy, vulnerability, and trust as best we can given the circumstances, with the tools at our disposal.

In addition, we talk about how tech can help us form creative solutions – including mental wellness apps, blockchain-based employee recognition systems, and new ways to think about our relationship with the tech we’re growing all-too-familiar with.

(03:10) Mental Health and Virtual Connections

When the founder of Zoom, Eric S. Yuan, was in the early stages of creating and growing the company, he actually refused to do in-person meetings. “Why would I leave my office, or leave the country, to do an hour meeting when I can do the same meeting over a Zoom?” he stated.

But suddenly during the pandemic, we turned to video conferencing apps like Zoom, Skype, and FaceTime to make us feel like we still had social lives. Virtual game nights. Trivia Nights. Happy hours. Boozy brunches.

(04:40) The science of Zoom Fatigue

Humans are social creatures and communicators at heart. Research shows just how much we communicate non-verbally: body language, facial expressions, little tiny things that we’re able to see and decode, consciously or unconsciously.

According to experts, people tend to over-“perform” while on Zoom calls, because they’re able to constantly monitor their expressions, how emotive they’re being, their posture. Alice Boyes, a former clinical psychologist and author of The Healthy Mind Toolkit, told us: “Especially with Zoom, you do a lot of monitoring. We know from studies of social anxiety that people with social anxiety do a lot of internal monitoring.”

All of this leads to one thing, which many of us are experiencing right now: Exhaustion.


(06:25) Beyond Exhaustion

So, what can we do about this different type of exhaustion that we’re facing?

What happens when you’re more than just “tired”? What happens when you’re beyond exhausted, to the point where you literally can’t participate in another Zoom meeting without zoning completely out? John O’Duinn, the author of “Distributed Teams: The Art and Practice of Working Together While Physically Apart,” explained: “You had a particular way of interacting with people in meetings. You had exercise by walking around the building or walking up for lunch. Now you stay at home and you hop from one video call to another and you’re still in the same chair.”

The impact is not just on our mental health. According to a recent NY Times article, dentists are seeing more and more appointments from patients who have cracked their teeth from grinding them together.

Peter Jackson, CEO at Bluescape, suggested employers need to address their employees’ Zoom fatigue: “You have to step back and look at it from the standpoint of, look, you’re responsible for not just these people, but their spouses… their children, their lifestyle.”

(08:28) Relying on therapy

Virtual therapy is growing in popularity. There are many platforms out there, such as Talkspace and BetterHelp, where you can contact a virtual therapist.

The first reason: the growing perception that therapy is a good thing for anyone to do is helping the industry expand. You don’t have to hide that you have a therapist anymore. The other reason is that it’s never been easier to see one, thanks to the new tools and platforms that emerged with the acceleration of digital services during the pandemic. You no longer have to take time off and leave your office to see a therapist.

Surprisingly, some studies have shown that online therapy can be even more effective than in-person therapy. A July 2020 study from McMaster University reviewed seventeen randomized controlled trials comparing therapist-supported cognitive behavioral therapy delivered electronically to face-to-face cognitive behavioral therapy. The researchers found that online therapy improved patients’ symptoms more than face-to-face therapy did.


(10:17) Longing for Random Physical Connections

But what happens to our attitudes about work when we’re not getting that real, direct human interaction? With all of us being virtual, it can really impact our perceptions of work being done.

Chris Weiland, Director of Kin + Carta Americas Labs, explained: “I’m on a Zoom, I’m chatting with my team. We’re all working. Whereas at the office, if you get up from your desk and walk to the washroom or walk to the cafe, you’re not working, you’re not typing. You’re not at your desk. You’re not connected.” So, can technology also create the “randomness of relationship building” that we experience when we’re in the office?

We’re all learning the new way to work at the same time. Try stepping away from the computer for your next meeting. Dial in to Zoom from your phone as you walk around the neighborhood.

There is also some great advice from psychologists Rachel and Stephen Kaplan. They have shown that mental fatigue can be treated via Attention Restoration.

You can undo the fatigue of directed attention, a.k.a. Zoom fatigue, by spending time in an environment that has the following qualities:

  • Being Away: A place where you are not being forced to pay attention to tiny images of humans
  • Soft Fascination: A place that’s of interest to you but allows you to be in it and be reflective.
  • Extent: The environment has enough scope and coherence to occupy your mind.
  • Compatibility: It is a place that you choose to be in and are not forced to be there.

The Kaplans have looked at the restorative power of spending time in nature, which matches all of these criteria. And this is more than just their opinion. In a 1991 study, researchers compared how three groups of people performed on tasks that required a lot of attention. One group did not get to go on vacation. One group went on vacation in an urban area, and the third group went to a rural area. All groups were tested before and after. The control group’s performance declined. Not surprising. The urban vacation group’s performance also declined. Do not tell the NYC board of tourism this. The rural vacation group’s performance was the only one that improved. Amazingly enough, even just taking time to look at pictures of nature or art can help restore your attention. So, schedule a meeting with nature, and DO NOT attend via Zoom.

Finally, it is important that you continue to make human connections, real, human connections that involve your co-workers but have nothing to do with work. John O’Duinn gave us a great tip, “It’s important to intentionally have time to socially chit chat with others. So every day, intentionally make 10, 15 minutes, and just have an impromptu coffee with somebody. It’s not a lot of time, you don’t have an agenda.”


Contact tracing apps were supposed to help the world minimize the spread of COVID-19, and although the idea had a lot of promise, in reality, it fell short of expectations. In this episode, we dive deep into the role of technology and COVID-19, why contact tracing apps haven’t lived up to the hype, and what it would take to introduce something that… you know… works?

Contact Tracing 101

  • The premise is simple: your phone notifies you if you’ve been around someone who has tested positive for COVID-19.
  • Experts refer to the technology as “exposure notification” apps to make the point that it benefits the users, not the trackers.

The problem: widespread adoption

  • Google and Apple formed a rare, collaborative partnership to create the “Exposure Notifications” API based on Bluetooth Low Energy, keeping all data exchanged anonymous and limited to only what’s necessary.
  • Early versions of apps in states like Rhode Island, Utah, North Dakota, and South Dakota actually DID use GPS tracking, which sparked immediate privacy concerns, and the apps were pulled back.
  • An Oxford study concluded that 60% of a country’s population would need to use contact tracing technology in order for it to be effective. But many experts disagree and believe that any adoption could help mitigate risk.
  • It comes down to the user experience, and right now, these apps are a long way from “it just works.” Getting someone to download an app is the first step. Getting someone to use it is the second step. And anyone who’s deployed an application knows just how hard it is to do both of those things.

Getting more adoption:

  • What other options or experiences can we use to get more people to engage with contact tracing applications?
  • We have to prove that it works in order for adoption to happen. Right now, there is no social proof that contact tracing is working and minimizing the spread of COVID-19.
  • What if we paid Americans $1,000 to download a contact tracing app?

Show Notes

(01:15) Introducing Contact Tracing Apps

After a person tests positive for COVID-19, contact tracing is the process by which health officials try to track down who that person has had close contact with in order to warn those people and hopefully contain further spread.

(02:05) How do the apps work?

Most of them generally work like this: you have a friend over in your backyard for a socially-distanced drink after work; your phones recognize they’re close to one another and exchange some encrypted information via Bluetooth, which is added to an anonymous log simply noting that the phones were close.
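For the curious, here’s a heavily simplified sketch of that idea in Python. This is not the exact Google/Apple protocol (which uses its own key-derivation scheme); the key names and interval counts here are invented for illustration. The shape is the same, though: rotating anonymous IDs derived from a secret daily key, a local log of IDs heard nearby, and a purely local match against keys later published as positive.

```python
# Simplified exposure-notification sketch -- illustrative only, not the
# real Google/Apple protocol.
import hmac, hashlib, secrets

daily_key = secrets.token_bytes(16)  # never leaves your phone

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive the anonymous ID broadcast during one 10-minute interval."""
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Your phone keeps a local log of every rolling ID it hears over Bluetooth.
heard_log = {rolling_id(daily_key, 42)}  # pretend we heard this one nearby

# If someone tests positive, their daily keys are published. Your phone
# re-derives that key's rolling IDs and checks for overlap -- locally,
# without ever learning who the person was or where you met them.
published_positive_key = daily_key  # assume this key's owner tested positive
exposed = any(rolling_id(published_positive_key, i) in heard_log
              for i in range(144))  # 144 ten-minute intervals in a day
print(exposed)  # True -> you were near that phone during some interval
```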

(03:04) Language Matters.

Exposure Notification vs. “Contact Tracing”. Experts suggested we should refer to the technology as “Exposure Notifications” instead of contact tracing. Contact tracing often sends the wrong message—it sounds like the technology is, well, tracing you, tracking your movements. It suggests the benefit is geared toward those doing the tracking and tracing—the public health officials. Exposure Notifications, however, is about notifying YOU if you’ve been around someone who had COVID.


(05:23) In collaboration with Google & Apple

In a rare collaborative effort, the two tech giants created their “Exposure Notifications” API, using only Bluetooth Low Energy and keeping all data exchanged anonymous and limited to only what’s necessary. “It turns out that what they chose for their protocol was very, very similar to what organizations like the TCN Coalition were trying to do with their protocols,” explained Jenny Wanger, Head of Implementor’s Forum at the Linux Foundation Public Health (LFPH).

(08:46) The Oxford Study controversy

While talking with Jenny, we brought up a stat from an Oxford University study we’d seen cited over and over while doing research for this piece. According to sources all over the internet, this Oxford study concluded that 60% of a country’s population would need to use contact tracing technology in order for it to be effective. “This is an often misquoted statistic,” she said, explaining that 60% adoption would only be needed if exposure notification technology were the sole measure used to prevent COVID from spreading. (Find out more in her blog article.)

(11:06) Trust & Privacy at heart

Since the beginning, the issue of privacy and trust has been at the heart of the conversation. According to Jenny and the LFPH, confusion in the media and the public largely traces back to very early “contact tracing apps” that actually were using GPS tracking. But, she clarified: “There’s no way to track somebody back to who they are based on the Bluetooth data that’s being shared.”

For those who maintain privacy concerns, the importance of open source development is at the top of the priority list. This is what Google and Apple have done—the source code for the exposure notifications app is available on GitHub. 

So if privacy isn’t really the problem, what is?

“I think that we need to put a lot of intentional effort into publicizing the wins”

Ellie Daw – Senior Researcher & Working Group Lead at TCN Coalition

(15:43) Changing human behavior 

One major issue is friction. In the U.S., identifying the right app, downloading it, understanding how it works, and enabling your phone’s settings to allow it to work… it’s too much. “This is another thing that has to be prioritized and is competing with the other priorities of the average main street individual during this pandemic,” suggested Paul Heckel, VP of Experience at Kin + Carta.

(18:05) Prove it works

One simple thing that researchers and designers like Jenny and Ellie are longing for: simple stories showing that the tech actually works. “I think that one of the biggest elements of getting over that hurdle is going to be, let’s make sure that the people hear about the ways that it is working, and let’s just really celebrate those wins.”

(19:10) Focus on Small Community Adoption

Some are suggesting that public health officials and developers should look to how companies like Facebook, Uber, and WhatsApp gained traction before they were the behemoths they are today: Targeting local, highly-focused communities where they’d be of immediate use, and then scaling up. Paul actually brought up a study that PR firm Edelman puts out called the Trust Barometer. In 2019, “My employer” was the one that outscored everybody else on the trust index. 

(20:02) Incentivize

“Pay Every American $1,000 to Download a Contact Tracing Application”—that’s the headline of a Slate.com article written in August by Zachary Kallenborn, a national security consultant. While the logic has an appeal, critics have been quick to point out the difference between downloading an app and actually using it properly.

As we talked further, Paul suggested a different way to think about exactly that problem—incentivizing the behavior itself and forgoing the download altogether by integrating it into the “health” apps already on our phones. “I think that’s probably a better pathway to getting these things done, rather than municipal or state engineered and delivered contact tracing apps. I don’t think that’s ever going to work.”


(22:57) A public health issue

Jenny Wanger explained it’s critical to follow models focused on public health, not just digital user experience: “It’s not just about developing a viral loop and making sure that you’ve got app store optimization and throwing up some Google ads and then working on advertising on podcasts. We’re really looking at it as a public health intervention.”

(23:42) Narrow the options

This brings up an entirely different perspective, and we acknowledge the delicate dance we’re doing around the politics of this topic, but in matters of public health intervention, the argument would be, in Paul’s words: “Don’t give them a choice.”

As Paul explained, in the U.S., there’s a pretty good example of the non-elective type of approach already installed on every smartphone, one we happened to experience not too long ago when the “Derecho” storm brought a rare tornado warning. “The wireless emergency alert service,” he said. “There’s no friction. There’s no opt-in. It’s just mandatory.”


Certainly a big leap for anyone already concerned about privacy and access, but an interesting way to look at how certain trade-offs between public health and privacy really aren’t that controversial at all. What if those health apps all rolled out an Exposure Notifications feature? We have the greatest marketing machine known to man. Put it into service for the greater good. We got GM to make masks; let’s get Silicon Valley out there to save lives.