News

With AWS Inferentia, Amazon also enters the artificial intelligence chip market

Amazon used the AWS Re:Invent conference, which closes today in Las Vegas, to announce the launch of its Inferentia chips, its own alternative for machine learning projects, with which it aims to give researchers “high performance at low cost” starting in late 2019.

Inferentia will reach the market with support for the INT8 and FP16 data formats and for popular frameworks such as TensorFlow, Caffe2, and ONNX. And, being an Amazon product, it will also work with data coming from popular AWS services such as EC2, SageMaker, and the new Elastic Inference Engine announced this same week.
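As a rough sketch of the cloud-consumption model described above (selling usage time rather than physical chips), the example below shows how client code typically calls a model hosted behind an Amazon SageMaker endpoint via boto3. The endpoint name and payload are hypothetical placeholders, and nothing here is specific to Inferentia hardware.

```python
# Hedged sketch: invoking a hosted model through SageMaker's runtime API.
# The endpoint name and payload format are hypothetical placeholders.
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

response = runtime.invoke_endpoint(
    EndpointName="my-image-classifier",          # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"instances": [[0.1, 0.2, 0.3]]}),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```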

Amazon lands in yet another new market

The company founded by Jeff Bezos is a giant that at times seems unwilling to leave any market in the hands of other companies. By 2021, the artificial intelligence chip market will be worth $11.8 billion, according to analysts at Morningstar.

However, unlike Intel and Nvidia, the current leaders in this field, Amazon will not sell its chips as physical products; instead it will sell usage time on them through cloud services (following a model very similar to AWS's).

This difference in model does not mean it will not hurt the two market leaders: with Amazon being one of the biggest customers of both, a firm commitment to its own Inferentia chips could translate into major losses for them.

Amazon's strategy does not differ much from the one Google already adopted last summer with the launch of its Google TPU chip (short for ‘Tensor Processing Unit’). Holger Mueller, an analyst at Constellation Research, told TechCrunch that, although the Inferentia infrastructure is still two to three years behind the TPUs, this move by Amazon is highly significant:

“The speed and cost of running machine learning operations are a competitive differentiator for enterprises [as they will determine whether] companies (and nations, when they think about warfare) succeed or fail. That speed can only be achieved with custom hardware, and Inferentia is AWS's first step into this game.”

Source: www.xataka.com

A.I. took a test to detect lung cancer. It got an A.

Artificial intelligence may help doctors make more accurate readings of CT scans used to screen for lung cancer.

Computers were as good or better than doctors at detecting tiny lung cancers on CT scans, in a study by researchers from Google and several medical centers.

The technology is a work in progress, not ready for widespread use, but the new report, published Monday in the journal Nature Medicine, offers a glimpse of the future of artificial intelligence in medicine.

One of the most promising areas is recognizing patterns and interpreting images — the same skills that humans use to read microscope slides, X-rays, M.R.I.s and other medical scans.

By feeding huge amounts of data from medical imaging into systems called artificial neural networks, researchers can train computers to recognize patterns linked to a specific condition, like pneumonia, cancer or a wrist fracture that would be hard for a person to see. The system follows an algorithm, or set of instructions, and learns as it goes. The more data it receives, the better it becomes at interpretation.

The process, known as deep learning, is already being used in many applications, like enabling computers to understand speech and identify objects so that a self-driving car will recognize a stop sign and distinguish a pedestrian from a telephone pole. In medicine, Google has already created systems to help pathologists read microscope slides to diagnose cancer, and to help ophthalmologists detect eye disease in people with diabetes.
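A minimal sketch of the training process just described, assuming PyTorch and using random tensors as stand-ins for labeled scan slices (this is not the system from any study mentioned here, just an illustration of "feeding data into a neural network that learns as it goes"):

```python
# Toy illustration: a tiny convolutional classifier trained on stand-in
# 64x64 grayscale "slices" with binary labels.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

images = torch.randn(256, 1, 64, 64)        # pretend grayscale scan slices
labels = torch.randint(0, 2, (256,))        # 1 = condition present, 0 = absent
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),             # two output classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "The more data it receives, the better it becomes at interpretation":
# each pass over the data nudges the weights to reduce prediction error.
for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```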

“We have some of the biggest computers in the world,” said Dr. Daniel Tse, a project manager at Google and an author of the journal article. “We started wanting to push the boundaries of basic science to find interesting and cool applications to work on.”

In the new study, the researchers applied artificial intelligence to CT scans used to screen people for lung cancer, which caused 160,000 deaths in the United States last year, and 1.7 million worldwide. The scans are recommended for people at high risk because of a long history of smoking.

Studies have found that screening can reduce the risk of dying from lung cancer. In addition to finding definite cancers, the scans can also identify spots that might later become cancer, so that radiologists can sort patients into risk groups and decide whether they need biopsies or more frequent follow-up scans to keep track of the suspect regions.

But the test has pitfalls: It can miss tumors, or mistake benign spots for malignancies and push patients into invasive, risky procedures like lung biopsies or surgery. And radiologists looking at the same scan may have different opinions about it.

The researchers thought computers might do better. They created a neural network, with multiple layers of processing, and trained it by giving it many CT scans from patients whose diagnoses were known: Some had lung cancer, some did not and some had nodules that later turned cancerous.

Then, they began to test its diagnostic skill.

“The whole experimentation process is like a student in school,” Dr. Tse said. “We’re using a large data set for training, giving it lessons and pop quizzes so it can begin to learn for itself what is cancer, and what will or will not be cancer in the future. We gave it a final exam on data it’s never seen after we spent a lot of time training, and the result we saw on final exam — it got an A.”

Tested against 6,716 cases with known diagnoses, the system was 94 percent accurate. Pitted against six expert radiologists, when no prior scan was available, the deep learning model beat the doctors: It had fewer false positives and false negatives. When an earlier scan was available, the system and the doctors were neck and neck.
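For readers unfamiliar with the terms, the short sketch below shows how accuracy, false positives, and false negatives are tallied against known diagnoses; the labels are invented for illustration and are unrelated to the study's 6,716 cases.

```python
# Counting the evaluation quantities mentioned above on made-up labels.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]   # known diagnoses (1 = cancer)
y_pred = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0]   # model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"accuracy        : {accuracy_score(y_true, y_pred):.2f}")
print(f"false positives : {fp}  (benign spots flagged as cancer)")
print(f"false negatives : {fn}  (cancers the model missed)")
```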

The ability to process vast amounts of data may make it possible for artificial intelligence to recognize subtle patterns that humans simply cannot see.

“It may start out as something we can’t see, but that may open up new lines of inquiry,” said Dr. Mozziyar Etemadi, a research assistant professor of anesthesiology at Northwestern University Feinberg School of Medicine, and an author of the study.

Dr. Eric Topol, director of the Scripps Research Translational Institute in La Jolla, Calif., who has written extensively about artificial intelligence in medicine, said, “I’m pretty confident that what they’ve found is going to be useful, but it’s got to be proven.” Dr. Topol was not involved in the study.

Given the high rate of false positives and false negatives on the lung scans as currently performed, he said, “Lung CT for smokers, it’s so bad that it’s hard to make it worse.”

Asked if artificial intelligence would put radiologists out of business, Dr. Topol said, “Gosh, no!”

The idea is to help doctors, not replace them.

“It will make their lives easier,” he said. “Across the board, there’s a 30 percent rate of false negatives, things missed. It shouldn’t be hard to bring that number down.”

There are potential hazards, though. A radiologist who misreads a scan may harm one patient, but a flawed A.I. system in widespread use could injure many, Dr. Topol warned. Before they are unleashed on the public, he said, the systems should be studied rigorously, with the results published in peer-reviewed journals, and tested in the real world to make sure they work as well there as they did in the lab.

And even if they pass those tests, they still have to be monitored to detect hacking or software glitches, he said.

Shravya Shetty, a software engineer at Google and an author of the study, said, “How do you present the results in a way that builds trust with radiologists?” The answer, she said, will be to “show them what’s under the hood.”

Another issue is: If an A.I. system is approved by the F.D.A., and then, as expected, keeps changing with experience and the processing of more data, will its maker need to apply for approval again? If so, how often?

The lung-screening neural network is not ready for the clinic yet.

“We are collaborating with institutions around the world to get a sense of how the technology can be implemented into clinical practice in a productive way,” Dr. Tse said. “We don’t want to get ahead of ourselves.”

Source: www.nytimes.com

Making futures tangible

A Deep Dive with Futurist Stuart Candy

What is your preferred future? Who shapes the future? How might we create more equitable and inclusive futures?

The ability to imaginatively think about the future is a uniquely human skill.

Yet, how we feel about the future, our visions of what could be, and our determination of where and how to invest for the future are often abstract and opaque to others. How might we make those assumptions and stories more real, more concrete, more vivid so that we might individually and collectively make better decisions today?

On March 13, 2019, my colleagues and I hosted The Future’s Happening — an open event at the Hasso Plattner Institute for Design, also known as the Stanford d.school. The Future’s Happening event was conceived to explore, deepen, and extend our individual and collective ability to intentionally and proactively design our preferred futures.

After nearly 15 years of studying, practicing, and teaching design and futures methods, I believe there is an opportunity to raise the profile of the conversation and support more connection between the communities. This is the primary focus of my work at the d.school as a designer in residence, and beyond.

The Future’s Happening was a conversation about the future, for the future. It was a designed experience to visit the future through different modalities to feel the future, to envision the future, to interrogate the future, and to rehearse the future.

There are no facts about the future. But how we feel about it — our posture and sense of agency around it, the processes we use to understand and interrogate it, and the practices and tools we use to shape it — those are very much available to us today.

As part of our exploration, we asked three futures pioneers to join us in conversation about futures and design work, to explore how we might be more intentional future-focused designers of our future.

Photo from The Future’s Happening at the Stanford d.school

This article, the first of four highlighting the work of these future pioneers, is focused on Stuart Candy, an Associate Professor of Design at Carnegie Mellon University, where he is responsible for infusing futures into the design curriculum. He is also co-director of Situation Lab — a research unit focused on making futures tangible and accessible, like the Lab’s award-winning card game The Thing From the Future. Stuart is currently working with organizations like NASA Jet Propulsion Laboratory, to explore new ways of scaffolding imagination in support of new space missions, and the United States Conference of Mayors, to introduce city leaders from across the country to ways of envisioning new pathways for their cities.

As both an academic and practitioner of futures work, also known as foresight, he is well versed in the origins of the field, as well as why it is particularly relevant to decision makers today. According to Stuart, it’s useful to think of futures as an inherently interdisciplinary or transdisciplinary field — not just a single discipline in and of itself.

In his seminal book, Future Shock, published in 1970, Alvin Toffler talks about possible, probable, and preferable futures. Explains Stuart, “Each of those three kinds of lenses corresponds to different epistemologies, different forms of knowledge, different types of conversation.”

The possible calls for the arts, which ignites a more imaginative and expansive approach to the future.

The probable calls for science, which is grounded in more predictive tools for modeling systems.

The preferable calls for politics, which is about particular perspectives and values.

Stuart continues, “All three have been co-present in the futures field, but they are not owned by any one discipline, so there hasn’t necessarily been a comfortable home for the ‘futures’ field in the traditional university.” As a result, says Stuart, studies about futures may be found in unexpected places. He earned his Ph.D. in futures within the political science department of the University of Hawaii at Manoa. “That multiplicity and the difficulty of pigeonholing futures,” says Stuart, “has been a strength and a weakness at different times.”

Regardless, from his perspective, the field of futures — and the methods or approaches to thinking about it — is probably more popular today than it has been at any other time in our lives.

During the course of our conversation, I asked Stuart to share the source of his inspiration to bring futures to life through experiences, and why it is so valuable to generate feelings, learning, and insights through that medium. In his view there is a deeply political quality to future making.

“It’s not possible to have an apolitical image of the future. There’s no such thing. Any story that you tell, any stance that you take, is going to be an expression of a certain set of values and interests, and implicitly the exclusion of others. That’s inevitable because it’s not possible to see from all perspectives at once. That isn’t how perspective works.”

What we can do is strive to create circumstances to support seeing from multiple perspectives, and attempting to empathize with and bridge into the experiences of others, past, present and future alike.

“This is one of our perennial struggles as human beings,” explains Stuart. “To be able to toggle back and forth between where we are, and other perspectives that we come into contact with — or that we ordinarily don’t or can’t come into contact with.”

This is particularly the case when we talk about longer time horizons, including future generations. These individuals and communities are not in the conversation. Says Stuart, “to accommodate or provide for the interests of, and in a word to care for, future generations is inherently a tricky problem, because they aren’t there to speak for themselves.”

According to Stuart, a possible solution is to try to construct virtual opportunities to imaginatively engage with what might happen. In essence, to make time to visit the future. This can be a tremendous challenge because the gravity of the present constantly pulls us back to the problems of the moment, and the press of the imminent constantly outweighs the urgent and the important in the longer run. However, Stuart and his colleagues, through a family of design-driven approaches called ‘experiential futures’, seek to make these hypotheticals feel real enough today to compete with the urgencies of the present.

“We’ve done experiential futures work for legislators, for social impact groups, and for public art campaigns, and specifics from one intervention to another can vary a great deal. We might invite people into immersive scenarios set decades from now, to support community deliberations, or we might launch an imaginary product at a trade show as if it were real today, or we might create tangible artifacts from possible futures of a neighborhood and install them in the streets for the residents to encounter. In all these cases, we are creating invitations to consider a visceral reality that might be lying in wait, rather than a sort of cognitive thought bubble of ‘What if’ that can all be too easily dismissed.”

Which brings us to a very important question. How do we prototype the future in a way that gets us to feel urgency for it? This is important for so many big issues looming before us, such as climate change, political elections, forced migration, among other global challenges.

These methods help us move beyond the feeling that “the future is going to happen to us, and I don’t know what that means.” Instead, we can build more agency by bringing that future to life. And we need to do that not just because it paints a pretty picture of what we want to happen, but because it gets us to feel something, which will better inform our decisions today. This might help us get ahead of the crisis or unwanted outcome.

Referring to The Future’s Happening event, Stuart told the group, “The beauty of bringing together design and futures methods is that it takes these conceptual infrastructures developed in the foresight field over the last half century, these handrails for thinking differently at a conceptual level, and knits them to the language of materiality, of making things real with design. You bring the kind of top-down of futures together with the bottom-up of design, and they meet in the middle in this glorious way. Each one contains something in its DNA that the other has historically lacked.”

Stuart’s observations mirror many of the intentional agency-building practices that we focus on building in classes at the d.school. Learning how to navigate ambiguity is a critical one. Moving between abstract and concrete is another. Experimenting rapidly. And learning how to communicate deliberately. Those design abilities are also critical practices of futures work.

What’s clear is that futures and design practices have much to offer each other, and to a greater community of eager problem solvers facing an increasingly complex world. Exploring each field’s origins, applications, and ability to ignite positive change is part of a larger opportunity to shape preferred futures rather than feel at the mercy of the seemingly inevitable.

In the next article in this series, we’ll consider the topic of leadership and futures-centered design. You’ll meet Katherine Fulton — a futures strategist, founder of The Monitor Institute, and thought leader on social impact and philanthropy. She works with leaders of social movements, utilizing networks and ecosystems to build sustainable capacity for positive change. We’ll explore the challenges leaders face in imagining bold futures, the importance of bringing multiple perspectives into the process, and the inherent tensions of investing in the future while managing the pull of the present.

Source: medium.com

Will Artificial Intelligence enhance or hack humanity?

THIS WEEK, I interviewed Yuval Noah Harari, the author of three best-selling books about the history and future of our species, and Fei-Fei Li, one of the pioneers in the field of artificial intelligence. The event was hosted by the Stanford Center for Ethics and Society, the Stanford Institute for Human-Centered Artificial Intelligence, and the Stanford Humanities Center. A transcript of the event follows.

Nicholas Thompson: Thank you, Stanford, for inviting us all here. I want this conversation to have three parts: First, lay out where we are; then talk about some of the choices we have to make now; and last, talk about some advice for all the wonderful people in the hall.

Yuval, the last time we talked, you said many, many brilliant things, but one that stuck out was a line where you said, “We are not just in a technological crisis. We are in a philosophical crisis.” So explain what you meant and explain how it ties to AI. Let’s get going with a note of existential angst.

Yuval Noah Harari: Yeah, so I think what’s happening now is that the philosophical framework of the modern world that was established in the 17th and 18th century, around ideas like human agency and individual free will, is being challenged like never before. Not by philosophical ideas, but by practical technologies. And we see more and more questions, which used to be the bread and butter of the philosophy department, being moved to the engineering department. And that’s scary, partly because unlike philosophers, who are extremely patient people, they can discuss something for thousands of years without reaching any agreement and they’re fine with that, the engineers won’t wait. And even if the engineers are willing to wait, the investors behind the engineers won’t wait. So it means that we don’t have a lot of time. And in order to encapsulate what the crisis is, maybe I can try and formulate an equation to explain what’s happening. And the equation is: B times C times D equals HH, which means biological knowledge multiplied by computing power, multiplied by data equals the ability to hack humans. And the AI revolution or crisis is not just AI, it’s also biology. It’s biotech. There is a lot of hype now around AI and computers, but that is just half the story. The other half is the biological knowledge coming from brain science and biology. And once you link that to AI, what you get is the ability to hack humans. And maybe I’ll explain what it means, the ability to hack humans: to create an algorithm that understands me better than I understand myself, and can therefore manipulate me, enhance me, or replace me. And this is something that our philosophical baggage and all our belief in, you know, human agency and free will, and the customer is always right, and the voter knows best, it just falls apart once you have this kind of ability.

NT: Once you have this kind of ability, and it’s used to manipulate or replace you, not if it’s used to enhance you?

YNH: Also when it’s used to enhance you, the question is, who decides what is a good enhancement and what is a bad enhancement? So our immediate fallback position is to fall back on the traditional humanist ideas: that the customer is always right, the customers will choose the enhancement. Or the voter is always right, the voters will vote, there will be a political decision about the enhancement. Or if it feels good, do it. We’ll just follow our heart, we’ll just listen to ourselves. None of this works when there is a technology to hack humans on a large scale. You can’t trust your feelings, or the voters, or the customers on that. The easiest people to manipulate are the people who believe in free will, because they think they cannot be manipulated. So how do you decide what to enhance if, and this is a very deep ethical and philosophical question—again that philosophers have been debating for thousands of years—what is good? What are the good qualities we need to enhance? So if you can’t trust the customer, if you can’t trust the voter, if you can’t trust your feelings, who do you trust? What do you go by?

NT: All right, Fei-Fei, you have a PhD, you have a CS degree, you’re a professor at Stanford: does B times C times D equal HH? Is Yuval’s theory the right way to look at where we’re headed?

Fei-Fei Li: Wow. What a beginning! Thank you, Yuval. One of the things—I’ve been reading Yuval’s books for the past couple of years and talking to you—and I’m very envious of philosophers now because they can propose questions but they don’t have to answer them. Now as an engineer and scientist, I feel like we have to now solve the crisis. And I’m very thankful that Yuval, among other people, has opened up this really important question for us. When you said the AI crisis, I was sitting there thinking, this is a field I loved and feel passionate about and researched for 20 years, and that was just a scientific curiosity of a young scientist entering a PhD in AI. What happened that 20 years later it has become a crisis? And it actually speaks to the evolution of AI. What got me where I am today, and got my colleagues at Stanford where we are today with Human-Centered AI, is that this is a transformative technology. It’s a nascent technology. It’s still a budding science compared to physics, chemistry, biology, but with the power of data, computing, and the kind of diverse impact AI is making, it is, like you said, touching human lives and business in broad and deep ways. And responding to those kinds of questions and crises facing humanity, I think one of the proposed solutions that Stanford is making an effort about is: can we reframe the education, the research, and the dialog of AI and technology in general in a human-centered way? We’re not necessarily going to find a solution today, but can we involve the humanists, the philosophers, the historians, the political scientists, the economists, the ethicists, the legal scholars, the neuroscientists, the psychologists, and many more other disciplines into the study and development of AI in the next chapter, in the next phase?

“Maybe I can try and formulate an equation to explain what’s happening. And the equation is: B times C times D equals HH, which means biological knowledge multiplied by computing power, multiplied by data equals the ability to hack humans.”

YUVAL NOAH HARARI

NT: Don’t be so certain we’re not going to get an answer today. I’ve got two of the smartest people in the world glued to their chairs, and I’ve got 72 more minutes. So let’s give it a shot.

FL: He said we have thousands of years!

NT: Let me go a little bit further on Yuval’s opening statement. There are a lot of crises about AI that people talk about, right? They talk about AI becoming conscious and what will that mean. They talk about job displacement; they talk about biases. And Yuval has very clearly laid out what he thinks is the most important one, which is the combination of biology plus computing plus data leading to hacking. Is that specific concern what people who are thinking about AI should be focused on?

FL: Absolutely. So any technology humanity has created starting with fire is a double-edged sword. So it can bring improvements to life, to work, and to society, but it can bring the perils, and AI has the perils. You know, I wake up every day worried about the diversity, inclusion issue in AI. We worry about fairness or the lack of fairness, privacy, the labor market. So absolutely we need to be concerned and because of that, we need to expand the research, and the development of policies and the dialog of AI beyond just the codes and the products into these human rooms, into the societal issues. So I absolutely agree with you on that, that this is the moment to open the dialog, to open the research in those issues.

NT: Okay.

YNH: Even though I will just say that again, part of my fear is the dialog. I don’t fear AI experts talking with philosophers, I’m fine with that. Historians, good. Literary critics, wonderful. I fear the moment you start talking with biologists. That’s my biggest fear. When you and the biologists realize, “Hey, we actually have a common language. And we can do things together.” And that’s when the really scary things, I think…

FL: Can you elaborate on what is scaring you? That we talk to biologists?

YNH: That’s the moment when you can really hack human beings, not by collecting data about our search words or our purchasing habits, or where do we go about town, but you can actually start peering inside, and collect data directly from our hearts and from our brains.

FL: Okay, can I be specific? First of all the birth of AI is AI scientists talking to biologists, specifically neuroscientists, right. The birth of AI is very much inspired by what the brain does. Fast forward to 60 years later, today’s AI is making great improvements in healthcare. There’s a lot of data from our physiology and pathology being collected and using machine learning to help us. But I feel like you’re talking about something else.

YNH: That’s part of it. I mean, if there wasn’t a great promise in the technology, there would also be no danger because nobody would go along that path. I mean, obviously, there are enormously beneficial things that AI can do for us, especially when it is linked with biology. We are about to get the best healthcare in the world, in history, and the cheapest and available for billions of people by their smartphones. And this is why it is almost impossible to resist the temptation. And with all the issues of privacy, if you have a big battle between privacy and health, health is likely to win hands down. So I fully agree with that. And you know, my job as a historian, as a philosopher, as a social critic is to point out the dangers in that. Because, especially in Silicon Valley, people are very much familiar with the advantages, but they don’t like to think so much about the dangers. And the big danger is what happens when you can hack the brain and that can serve not just your healthcare provider, that can serve so many things for a crazy dictator.

NT: Let’s focus on what it means to hack the brain. Right now, in some ways my brain is hacked, right? There’s an allure of this device, it wants me to check it constantly, like my brain has been a little bit hacked. Yours hasn’t because you meditate two hours a day, but mine has and probably most of these people have. But what exactly is the future brain hacking going to be that it isn’t today?

YNH: Much more of the same, but on a much larger scale. I mean, the point when, for example, more and more of your personal decisions in life are being outsourced to an algorithm that is just so much better than you. So you know, we have two distinct dystopias that kind of mesh together. We have the dystopia of surveillance capitalism, in which there is no Big Brother dictator, but more and more of your decisions are being made by an algorithm. And it’s not just decisions about what to eat or where to shop, but decisions like where to work and where to study, and whom to date and whom to marry and whom to vote for. It’s the same logic. And I would be curious to hear if you think that there is anything in humans which is by definition unhackable. That we can’t reach a point when the algorithm can make that decision better than me. So that’s one line of dystopia, which is a bit more familiar in this part of the world. And then you have the full-fledged dystopia of a totalitarian regime based on a total surveillance system. Something like the totalitarian regimes that we have seen in the 20th century, but augmented with biometric sensors and the ability to basically track each and every individual 24 hours a day.

And you know, which in the days of Stalin or Hitler was absolutely impossible because they didn’t have the technology, but maybe might be possible in 20 years, 30 years. So, we can choose which dystopia to discuss but they are very close…

NT: Let’s choose the liberal democracy dystopia. Fei-Fei, do you want to answer Yuval’s specific question, which is, Is there something in Dystopia A, liberal democracy dystopia, is there something endemic to humans that cannot be hacked?

FL: So when you asked me that question, just two minutes ago, the first word that came to my mind is Love. Is love hackable?

YNH: Ask Tinder, I don’t know.

FL: Dating!

YNH: That’s a defense…

FL: Dating is not the entirety of love, I hope.

YNH: But the question is, which kind of love are you referring to? If you’re referring to Greek philosophical love or the loving kindness of Buddhism, that’s one question, which I think is much more complicated. If you are referring to the biological, mammalian courtship rituals, then I think yes. I mean, why not? Why is it different from anything else that is happening in the body?

FL: But humans are humans because we’re—there’s some part of us that is beyond the mammalian courtship, right? Is that part hackable?

YNH: So that’s the question. I mean, you know, in most science fiction books and movies, they give your answer. When the extraterrestrial evil robots are about to conquer planet Earth, and nothing can resist them, resistance is futile, at the very last moment, humans win because the robots don’t understand love.

FL: The last moment is one heroic white dude that saves us. But okay so the two dystopias, I do not have answers to the two dystopias. But what I want to keep saying is, this is precisely why this is the moment that we need to seek for solutions. This is precisely why this is the moment that we believe the new chapter of AI needs to be written by cross-pollinating efforts from humanists, social scientists, to business leaders, to civil society, to governments, to come at the same table to have that multilateral and cooperative conversation. I think you really bring out the urgency and the importance and the scale of this potential crisis. But I think, in the face of that, we need to act.

“The easiest people to manipulate are the people who believe in free will, because they think they cannot be manipulated.”

YUVAL NOAH HARARI

YNH: Yeah, and I agree that we need cooperation that we need much closer cooperation between engineers and philosophers or engineers and historians. And also from a philosophical perspective, I think there is something wonderful about engineers, philosophically—

FL: Thank you!

YNH: — that they really cut the bullshit. I mean, philosophers can talk and talk, you know, in cloudy and flowery metaphors, and then the engineers can really focus the question. Like I just had a discussion the other day with an engineer from Google about this, and he said, “Okay, I know how to maximize people’s time on the website. If somebody comes to me and tells me, ‘Look, your job is to maximize time on this application.’ I know how to do it because I know how to measure it. But if somebody comes along and tells me, ‘Well, you need to maximize human flourishing, or you need to maximize universal love.’ I don’t know what it means.” So the engineers go back to the philosophers and ask them, “What do you actually mean?” Which, you know, a lot of philosophical theories collapse around that, because they can’t really explain that—and we need this kind of collaboration.

FL: Yeah. We need an equation for that.

NT: But Yuval, is Fei-Fei right? If we can’t explain and we can’t code love, can artificial intelligence ever recreate it, or is it something intrinsic to humans that the machines will never emulate?

YNH: I don’t think that machines will feel love. But you don’t necessarily need to feel it, in order to be able to hack it, to monitor it, to predict it, to manipulate it. So machines don’t like to play Candy Crush, but they can still—

NT: So you think this device, in some future where it’s infinitely more powerful than it is right now, it could make me fall in love with somebody in the audience?

YNH: That goes to the question of consciousness and mind, and I don’t think that we have the understanding of what consciousness is to answer the question whether a non-organic consciousness is possible or is not possible. I think we just don’t know. But again, the bar for hacking humans is much lower. The machines don’t need to have consciousness of their own in order to predict our choices and manipulate our choices. If you accept that something like love is in the end a biological process in the body, and if you think that AI can provide us with wonderful healthcare by being able to monitor and predict something like the flu, or something like cancer, what’s the essential difference between flu and love? Is this biological, and is that something else which is so separated from the biological reality of the body that, even if we have a machine capable of monitoring or predicting flu, it still lacks something essential in order to do the same thing with love?

FL: So I want to make two comments, and this is where my engineering comes in. Personally speaking, we’re making two very important assumptions in this part of the conversation. One is that AI is so omnipotent that it has achieved a state where it’s beyond predicting anything physical, it’s getting to the consciousness level, it’s getting to even the ultimate love level of capability. And I do want to make sure that we recognize that we’re very, very, very far from that. This technology is still very nascent. Part of the concern I have about today’s AI is the super-hyping of its capability. So I’m not saying that that’s not a valid question. But I think that part of this conversation is built upon the assumption that this technology has become that powerful, and I don’t even know how many decades we are from that. The second, related assumption is that our conversation is based on a world, or a state of the world, in which only that powerful AI exists, or only that small group of people who have produced the powerful AI and intend to hack humans exists. But in fact, our human society is so complex, there are so many of us, right? I mean, humanity in its history has faced so much technology that, if we had left it in the hands of a bad player alone, without any regulation, multinational collaboration, rules, laws, moral codes, that technology could have, maybe not hacked humans, but destroyed humans or hurt humans in massive ways. It has happened, but by and large, our society in a historical view is moving to a more civilized and controlled state. So I think it’s important to look at that greater society and bring other players and people into this dialog, so we don’t talk like there’s only this omnipotent AI deciding it’s gonna hack everything to the end. And that brings me to your topic that, in addition to hacking humans at that level that you’re talking about, there are some very immediate concerns already: diversity, privacy, labor, legal changes, you know, international geopolitics. And I think it’s critical to tackle those now.

NT: I love talking to AI researchers, because five years ago, all the AI researchers were saying it’s much more powerful than you think. And now they’re like, it’s not as powerful as you think. Alright, so I’ll just let me ask—

FL: It’s because five years ago, you had no idea what AI is, now you’re extrapolating too much.

NT: I didn’t say it was wrong. I just said it was the thing. I want to go into what you just said. But before we do that, I want to take one question here from the audience, because once we move into the second section we’ll be able to answer it. So the question is for Yuval, How can we avoid the formation of AI powered digital dictatorships? So how do we avoid dystopia number two, let’s enter that. And then let’s go, Fei-Fei, into what we can do right now, not what we can do in the future.

YNH: The key issue is how to regulate the ownership of data. Because we won’t stop research in biology, and we won’t stop researching computer science and AI. So from the three components of biological knowledge, computing power and data, I think data is the easiest, and it’s also very difficult, but still the easiest kind to regulate, to protect. Let’s place some protections there. And there are efforts now being made. And they are not just political efforts, but, you know, also philosophical efforts to really conceptualize: What does it mean to own data or to regulate the ownership of data? Because we have a fairly good understanding of what it means to own land. We had thousands of years of experience with that. We have a very poor understanding of what it actually means to own data and how to regulate it. But this is the very important front that we need to focus on in order to prevent the worst dystopian outcomes.

And I agree that AI is not nearly as powerful as some people imagine. But this is why I think we need to place the bar low, to reach a critical threshold. We don’t need the AI to know us perfectly, which will never happen. We just need the AI to know us better than we know ourselves, which is not so difficult because most people don’t know themselves very well and often make huge mistakes in critical decisions. So whether it’s finance or career or love life, we see this shifting authority from humans to algorithms. They can still be terrible. But as long as they are a bit less terrible than us, the authority will shift to them.

NT: In your book, you tell a very illuminating story about your own self and your own coming to terms with who you are and how you could be manipulated. Will you tell that story here about coming to terms with your sexuality and the story you told about Coca-Cola in your book? Because I think that will make it clear what you mean here very well.

YNH: Yes. So I said, I only realized that I was gay when I was 21. And I look back at the time and I was, I don’t know, 15, 17, and it should have been so obvious. It’s not like I’m a stranger. I’m with myself 24 hours a day. And I just didn’t notice any of, like, the screaming signs that are saying, “You are gay.” And I don’t know how, but the fact is, I missed it. Now in AI, even a very stupid AI today will not miss it.

FL: I’m not so sure!

YNH: So imagine, this is not like a science fiction scenario of a century from now, this can happen today that you can write all kinds of algorithms that, you know, they’re not perfect, but they are still better, say, than the average teenager. And what does it mean to live in a world in which you learn about something so important about yourself from an algorithm? What does it mean, what happens if the algorithm doesn’t share the information with you, but it shares the information with advertisers? Or with governments? So if you want to, and I think we should, go down from the cloud, the heights, of you know, the extreme scenarios, to the practicalities of day-to-day life. This is a good example, because this is already happening.

NT: Well, let’s take the elevator down to the more conceptual level. Let’s talk about what we can do today, as we think about the risks of AI, the benefits of AI, and tell us, you know, sort of your punch list of what you think the most important things we should be thinking about with AI are.

FL: Oh boy, there’s so many things we could do today. And I cannot agree more with Yuval that this is such an important topic. Again, I’m gonna try to speak about all the efforts that have been made at Stanford, because I think this is a good representation of what we believe are the many efforts we can make. So in human-centered AI, which is the overall theme, we believe that the next chapter of AI should be human-centered, and we believe in three major principles. One principle is to invest in the next generation of AI technology that reflects more of the kind of human intelligence we would like. I was just thinking about your comment about AI’s dependence on data and how the policy and governance of data should emerge in order to regulate and govern the AI impact. Well, we should be developing technology that can explain AI, we call it explainable AI, or AI interpretability studies; we should be focusing on technology that has a more nuanced understanding of human intelligence. We should be investing in the development of less data-dependent AI technology that will take into consideration intuition, knowledge, creativity and other forms of human intelligence. So that kind of human-intelligence-inspired AI is one of our principles.

The second principle is to, again, welcome in the kind of multidisciplinary study of AI. Cross-pollinating with economics, with ethics, with law, with philosophy, with history, cognitive science and so on. Because there is so much more we need to understand in terms of a social, human, anthropological, ethical impact. And we cannot possibly do this alone as technologists. Some of us shouldn’t even be doing this. It’s the ethicists, philosophers who should participate and work with us on these issues. So that’s the second principle. And within this, we work with policymakers. We convene the kind of dialogs of multilateral stakeholders.

Then the third, last but not least, I think, Nick, you said that at the very beginning of this conversation, that we need to promote the human-enhancing and collaborative and augmentative aspects of this technology. You have a point. Even there, it can become manipulative. But we need to start with that sense of alertness, understanding, but still promote the kind of benevolent application and design of this technology. At least, these are the three principles that Stanford’s Human-Centered AI Institute is based on. And I just feel very proud, within the short few months since the birth of this institute, there are more than 200 faculty involved on this campus in this kind of research, dialog, study, education, and that number is still growing.

NT: Of those three principles, let’s start digging into them. So let’s go to number one, explainability, because this is a really interesting debate in artificial intelligence. So there are some practitioners who say you should have algorithms that can explain what they did and the choices they made. Sounds eminently sensible. But how do you do that? I make all kinds of decisions that I can’t entirely explain. Like, why did I hire this person, not that person? I can tell a story about why I did it. But I don’t know for sure. If we don’t know ourselves well enough to always be able to truthfully and fully explain what we did, how can we expect a computer, using AI, to do that? And if we demand that here in the West, then there are other parts of the world that don’t demand it and may be able to move faster. So why don’t I ask you the first part of that question, and Yuval the second part of that question. So the first part is: can we actually get explainability if it’s super hard even within ourselves?

FL: Well, it’s pretty hard for me to multiply two digits, but, you know, computers can do that. So the fact that something is hard for humans doesn’t mean we shouldn’t try to get the machines to do it. Especially, you know, after all these algorithms are based on very simple mathematical logic. Granted, we’re dealing with neural networks these days that have millions of nodes and billions of connections. So explainability is actually tough. It’s ongoing research. But I think this is such fertile ground. And it’s so critical when it comes to healthcare decisions, financial decisions, legal decisions. There’s so many scenarios where this technology can be potentially positively useful, but with that kind of explainable capability, so we’ve got to try and I’m pretty confident with a lot of smart minds out there, this is a crackable thing.

On top of that, I think you have a point that if we have technology that can explain the decision-making process of algorithms, it makes it harder for it to manipulate and cheat. Right? It’s a technical solution, not the entirety of the solution, that will contribute to the clarification of what this technology is doing.
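One concrete flavor of the explainability research Li refers to is feature-importance analysis. The sketch below is a minimal example on synthetic data (not any Stanford system or clinical model): scikit-learn's permutation importance estimates how much each input feature drives a trained model's predictions.

```python
# Permutation feature importance: shuffle one feature at a time and see
# how much the model's test accuracy drops. Everything here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # four anonymous features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # outcome driven by features 0 and 2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```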

YNH: But because, presumably, the AI makes decisions in a radically different way than humans, then even if the AI explains its logic, the fear is it will make absolutely no sense to most humans. Most humans, when they are asked to explain a decision, tell a story in a narrative form, which may or may not reflect what is actually happening within them. In many cases, it doesn’t reflect it; it’s just a made-up rationalization and not the real thing. Now an AI could be much different than a human in telling me, like, I applied to the bank for a loan. And the bank says no. And I asked why not? And the bank says okay, we will ask our AI. And the AI gives this extremely long statistical analysis based not on one or two salient features of my life, but on 2,517 different data points, which it took into account and gave different weights. And why did you give this this weight? And why did you give… Oh, there is another book about that. And most of the data points to a human would seem completely irrelevant. You applied for a loan on Monday, and not on Wednesday, and the AI discovered that, for whatever reason, it’s after the weekend, whatever, people who apply for loans on a Monday are 0.075 percent less likely to repay the loan. So it goes into the equation. And I get this book of the real explanation. And finally, I get a real explanation. It’s not like sitting with a human banker that just bullshits me.

FL: So are you rooting for AI? Are you saying AI is good in this case?

YNH: In many cases, yes. I mean, I think in many cases, it’s two sides of the coin. I think that in many ways, the AI in this scenario will be an improvement over the human banker. Because for example, you can really know what the decision is based on presumably, right, but it’s based on something that I as a human being just cannot grasp. I just don’t—I know how to deal with simple narrative stories. I didn’t give you a loan because you’re gay. That’s not good. Or because you didn’t repay any of your previous loans. Okay, I can understand that. But my mind doesn’t know what to do with the real explanation that the AI will give, which is just this crazy statistical thing …

“Part of the concern I have about today’s AI is that super-hyping of its capability. Part of this conversation is built upon that assumption that this technology has become that powerful and I don’t even know how many decades we are from that.”

FEI-FEI LI

FL: So there are two layers to your comment. One is how do you trust and be able to comprehend AI’s explanation? Second is actually can AI be used to make humans more trustful, or be more trustworthy, as humans. The first point, I agree with you: if AI gives you 2,000 dimensions of potential features with probability, it’s not understandable, but the entire history of science in human civilization is to be able to communicate the results of science in better and better ways. Right? Like I just had my annual physical and a whole bunch of numbers came to my cell phone. And, well, first of all my doctors, the experts, can help me to explain these numbers. Now even Wikipedia can help me to explain some of these numbers, but the technological improvements of explaining these will improve. It’s our failure as technologists if we just throw 200 or 2,000 dimensions of probability numbers at you.

YNH: But this is the explanation. And I think that the point you raised is very important. But I see it differently. I think science is getting worse and worse in explaining its theories and findings to the general public, which is the reason for things like doubting climate change, and so forth. And it’s not really even the fault of the scientists, because the science is just getting more and more complicated. And reality is extremely complicated. And the human mind wasn’t adapted to understanding the dynamics of climate change, or the real reasons for refusing to give somebody a loan. But that’s the point when you have an — and let’s put aside the whole question of manipulation and how can I trust. Let’s assume the AI is benign. And let’s assume there are no hidden biases and everything is ok. But still, I can’t understand.

FL: But that’s why people like Nick, the storyteller, have to explain… What I’m saying is, you’re right. It’s very complex.

NT: I’m going to lose my job to a computer like next week, but I’m happy to have your confidence with me!

FL: But that’s the job of the society collectively to explain the complex science. I’m not saying we’re doing a great job at all. But I’m saying there is hope if we try.

YNH: But my fear is that we just really can’t do it. Because the human mind is not built for dealing with these kinds of explanations and technologies. And it’s true for, I mean, it’s true for the individual customer who goes to the bank and the bank refused to give them a loan. And it can even be on the level, I mean, how many people today on earth understand the financial system? How many presidents and prime ministers understand the financial system?

NT: In this country, it’s zero.

YNH: So what does it mean to live in a society where the people who are supposed to be running the business… And again, it’s not the fault of a particular politician, it’s just the financial system has become so complicated. And I don’t think that economists are trying on purpose to hide something from the general public. It’s just extremely complicated. You have some of the wisest people in the world, going to the finance industry, and creating these enormously complex models and tools, which objectively you just can’t explain to most people, unless first of all, they study economics and mathematics for 10 years or whatever. So I think this is a real crisis. And this is again, this is part of the philosophical crisis we started with. And the undermining of human agency. That’s part of what’s happening, that we have these extremely intelligent tools that are able to make perhaps better decisions about our healthcare, about our financial system, but we can’t understand what they are doing and why they’re doing it. And this undermines our autonomy and our authority. And we don’t know as a society how to deal with that.

NT: Ideally, Fei-Fei’s institute will help that. But before we leave this topic, I want to move to a very closely related question, which I think is one of the most interesting, which is the question of bias in algorithms, which is something you’ve spoken eloquently about. And let’s start with the financial system. So you can imagine an algorithm used by a bank to determine whether somebody should get a loan. And you can imagine training it on historical data and historical data is racist. And we don’t want that. So let’s figure out how to make sure the data isn’t racist, and that it gives loans to people regardless of race. And we probably all, everybody in this room agrees that that is a good outcome.

But let’s say that analyzing the historical data suggests that women are more likely to repay their loans than men. Do we strip that out? Or do we allow that to stay in? If you allow it to stay in, you get a slightly more efficient financial system. If you strip it out, you have a little more equality between men and women. How do you make decisions about which biases you want to strip out and which ones are okay to keep?

FL: Yeah, that’s an excellent question, Nick. I mean, I’m not going to have the answers personally, but I think you touch on a really important question, which is, first of all, machine learning system bias is a real thing. You know, like you said, it starts with data. It probably starts with the very moment we’re collecting data and the type of data we’re collecting, all the way through the whole pipeline, and then all the way to the application. But biases come in very complex ways. At Stanford, we have machine learning scientists studying the technical solutions of bias, like, you know, de-biasing data or normalizing certain decision making. But we also have humanists debating about what is bias, what is fairness, when is bias good, when is bias bad? So I think you just opened up a perfect topic for research and debate and conversation on this topic. And I also want to point out that you’ve already used a very closely related example: a machine learning algorithm has the potential to actually expose bias. Right? You know, one of my favorite studies was a paper a couple of years ago analyzing Hollywood movies and using a machine learning face-recognition algorithm, which is a very controversial technology these days, to recognize that Hollywood systematically gives more screen time to male actors than female actors. No human being can sit there and count all the frames of faces and whether there is gender bias, and this is a perfect example of using machine learning to expose it. So in general there’s a rich set of issues we should study, and again, bring in the humanists, bring in the ethicists, bring in the legal scholars, bring in the gender studies experts.
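As a purely illustrative aside, the trade-off Thompson describes can be made concrete with a toy experiment: train the same loan-repayment model with and without a protected attribute and compare accuracy. All data below is synthetic and the "gender" column is a hypothetical stand-in; this sketches the question, not a recommended practice.

```python
# Toy comparison: does dropping a protected attribute change model accuracy?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
income = rng.normal(50, 15, n)                      # synthetic applicant income
gender = rng.integers(0, 2, n)                      # synthetic protected attribute
# Repayment mostly tracks income, with a small synthetic correlation to gender.
repaid = ((0.05 * income + 0.3 * gender + rng.normal(0, 1, n)) > 2.5).astype(int)

X_full = np.column_stack([income, gender])          # keeps the attribute
X_stripped = income.reshape(-1, 1)                  # strips it out

for name, X in [("with attribute", X_full), ("without attribute", X_stripped)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, repaid, random_state=0)
    acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.3f}")
```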

NT: Agreed. Though, standing up for humans, I knew Hollywood was sexist even before that paper. But yes, agreed.

FL: You’re a smart human.

NT: Yuval, on that question of the loans, do you strip out the racist data, do you strip out the gender data? Which biases do you get rid of, and which do you not?

YNH: I don’t think there is a one-size-fits-all answer. I mean, it’s a question we, again, we need this day-to-day collaboration between engineers and ethicists and psychologists and political scientists…

NT: But not biologists, right?

YNH: And increasingly, also biologists! And, you know, it goes back to the question of what should we do. So, we should teach ethics to coders as part of the curriculum, because the people today in the world who most need a background in ethics are the people in the computer science departments. So it should be an integral part of the curriculum. And also in the big corporations which are designing these tools, there should be embedded within the teams people with backgrounds in things like ethics, like politics, who always think in terms of: what biases might we inadvertently be building into our system? What could be the cultural or political implications of what we’re building? It shouldn’t be a kind of afterthought that you create this neat technical gadget, it goes into the world, something bad happens, and then you start thinking, “Oh, we didn’t see this one coming. What do we do now?” From the very beginning, it should be clear that this is part of the process.

FL: I do want to give a shout-out to Rob Reich, who introduced this whole event. He, my colleague Mehran Sahami and a few other Stanford professors have opened this course called Computers, Ethics and Public Policy. This is exactly the kind of class that’s needed. I think this quarter more than 300 students have signed up for it.

“We should be focusing on technology that has a more nuanced understanding of human intelligence.”

FEI-FEI LI

NT: Fantastic. I wish that course had existed when I was a student here. Let me ask an excellent question from the audience that ties into this. How do you reconcile the inherent trade-offs between the explainability and the efficacy and accuracy of algorithms?

FL: Great question. This question seems to assume that if you can explain, you’re less good or less accurate?

NT: Well, you can imagine that if you require explainability, you lose some level of efficiency, you’re adding a little bit of complexity to the algorithm.

FL: So, okay, first of all, I don’t necessarily believe in that. There’s no mathematical logic to this assumption. Second, let’s assume there is a possibility that an explainable algorithm suffers in efficiency. I think this is a societal decision we have to make. You know, when we put on the seatbelt while driving, that’s a little bit of an efficiency loss, because I have to do the seatbelt movement instead of just hopping in and driving. But as a society, we decided we can afford that loss of efficiency because we care more about human safety. So I think AI is the same kind of technology. As we make these kinds of decisions going forward in our solutions, in our products, we have to balance human well-being and societal well-being with efficiency.

NT: So Yuval, let me ask you about the global consequences of this. This is something that a number of people have asked about in different ways and we’ve touched on, but we haven’t hit head on. There are two countries: imagine you have Country A and you have Country B. Country A says, all of you AI engineers, you have to make it explainable. You have to take ethics classes, you have to really think about the consequences of what you’re doing. You’ve got to have dinner with biologists, you have to think about love, and you have to, like, read John Locke. That’s Country A. Country B says, just go build some stuff, right? These two countries at some point are going to come into conflict, and I’m going to guess that Country B’s technology might be ahead of Country A’s. Is that a concern?

YNH: Yeah, that’s always the concern with arms races, which become a race to the bottom in the name of efficiency and domination. What is extremely problematic or dangerous about the situation now with AI is that more and more countries are waking up to the realization that this could be the technology of domination in the 21st century. So you’re not talking about just any economic competition between different textile industries or even between different oil industries, where one country decides it doesn’t care about the environment at all and will just go full gas ahead while the other countries are much more environmentally aware. The situation with AI is potentially much worse, because those left behind could be dominated, exploited, conquered by those who forge ahead. So nobody wants to stay behind. And I think the only way to prevent this kind of catastrophic arms race to the bottom is greater global cooperation around AI. Now, this sounds utopian because we are now moving in exactly the opposite direction, toward more and more rivalry and competition. But this is part, I think, of our job, like with the nuclear arms race: to make people in different countries realize that this is an arms race, and that whoever wins, humanity loses. And it’s the same with AI. If AI becomes an arms race, then this is extremely bad news for all humans. And it’s easy for, say, people in the US to say, we are the good guys in this race, you should be cheering for us. But this is becoming more and more difficult in a situation when the motto of the day is America First. How can we trust the USA to be the leader in AI technology if ultimately it will serve only American interests and American economic and political domination? So I think most people, when they think about an arms race in AI, think USA versus China, but there are almost 200 other countries in the world. And most of them are far, far behind. And when they look at what is happening, they are increasingly terrified. And for a very good reason.

NT: The historical example you’ve given is a little unsettling. Because, if I heard your answer correctly, it’s that we need global cooperation, and if we don’t get it, we’re going to have an arms race. In the actual nuclear arms race, we tried for global cooperation from, I don’t know, roughly 1945 to 1950. And then we gave up, and then we said, we’re going full throttle in the United States. And then, why did the Cold War end the way it did? Who knows, but one argument would be that the United States and its relentless buildup of nuclear weapons helped to keep the peace until the Soviet Union collapsed. So if that is the parallel, then what might happen here is that we’ll try for global cooperation in 2019, 2020 and 2021, and then we’ll be off in an arms race. A, is that likely, and B, if it is, would you say, well, then the US needs to really move full throttle on AI, because it will be better for the liberal democracies to have artificial intelligence than the totalitarian states?

YNH: Well, I’m afraid it is very likely that cooperation will break down and we will find ourselves in an extreme version of an arms race. And in a way it’s worse than the nuclear arms race, because with nukes, at least until today, countries developed them but never used them. AI will be used all the time. It’s not something you have on the shelf for some doomsday war. It will be used all the time to create potentially total surveillance regimes and extreme totalitarian systems, in one way or the other. And so, from this perspective, I think the danger is far greater. You could say that the nuclear arms race actually saved democracy and the free market and, you know, rock and roll and Woodstock and the hippies; they all owe a huge debt to nuclear weapons. Because if nuclear weapons hadn’t been invented, there would have been a conventional arms race and conventional military buildup between the Soviet bloc and the American bloc. And that would have meant total mobilization of society. If the Soviets are having total mobilization, the only way the Americans can compete is to do the same.

Now what actually happened was that you had an extreme totalitarian mobilized society in the communist bloc. But thanks to nuclear weapons, you didn’t have to do it in the United States or in West Germany, or in France, because we relied on nukes. You don’t need millions of conscripts in the army.

And with AI it is going to be just the opposite, that the technology will not only be developed, it will be used all the time. And that’s a very scary scenario.

FL: Wait, can I just add one thing? I don’t know history like you do, but you said AI is different from nuclear technology, and I do want to point out that it is very different, because at the same time as you’re talking about these scarier scenarios, this technology is built on wide international scientific collaboration and is being used to make transportation better, to improve healthcare, to improve education. So it’s a very interesting new time that we haven’t seen before, because while we have this kind of competition, we also have massive international collaboration in the scientific community on these benevolent uses and on the democratization of this technology. I just think it’s important to see both sides of this.

YNH: You’re absolutely right. As I said, there are also enormous benefits to this technology.

FL: And in a globally collaborative way, especially between and among scientists.

YNH: The global aspect is more complicated, because the question is, what happens if there is a huge gap in abilities between some countries and most of the world? Would we have a rerun of the 19th century Industrial Revolution, when the few industrial powers conquered, dominated and exploited the entire world, both economically and politically? What’s to prevent that from repeating? So even without this scary war scenario, we might still find ourselves with a global exploitation regime, in which most of the benefits go to a small number of countries at the expense of everybody else.

FL: So students in the audience will laugh at this, but we are in a very different scientific research climate. The globalization of technology and technique happens in a way that the 19th century, even the 20th century, never saw before. Any basic AI research paper or technique that is produced, let’s say this week at Stanford, is easily distributed globally through this thing called arXiv, or a GitHub repository, or...

YNH: The information is out there. Yeah.

FL: The globalization of this scientific technology travels in a different way than in the 19th and 20th centuries. I don’t doubt there is confined development of this technology, maybe by some regimes. But we do have to recognize this global reach; the differences are pretty sharp now, and we might need to take that into consideration. The scenario you’re describing is harder to come about. I’m not saying impossible, but harder.

YNH: I’ll just say that it’s not just the scientific papers. Yes, the scientific papers are there. But if I live in Yemen, or in Nicaragua, or in Indonesia or in Gaza, yes, I can connect to the internet and download the paper. What will I do with that? I don’t have the data, I don’t have the infrastructure. I mean, you look at where the big corporations that hold all the data of the world are coming from; they’re basically coming from just two places. Even Europe is not really in the competition. There is no European Google, or European Amazon, or European Baidu, or European Tencent. And if you look beyond Europe, you think about Central America, you think about most of Africa, the Middle East, much of Southeast Asia: yes, the basic scientific knowledge is out there, but this is just one of the components that go into creating something that can compete with Amazon or with Tencent, or with the abilities of governments like the US government or the Chinese government. So I agree that the dissemination of information and basic scientific knowledge are in a completely different place than in the 19th century.

NT: Let me ask you about that, because it’s something three or four people have asked in the questions: it seems like there could be a centralizing force in artificial intelligence that will make whoever has the data and the best computers more powerful, and it could then accentuate income inequality, both within countries and across the world, right? You can imagine the countries you’ve just mentioned, the United States and China, with Europe lagging behind and Canada somewhere behind them, all way ahead of Central America; it could accentuate global income inequality. A, do you think that’s likely, and B, how much does it worry you?

YNH: As I said, it’s very likely, and it’s already happening. And it’s extremely dangerous, because the economic and political consequences could be catastrophic. We are talking about the potential collapse of entire economies and countries, countries that depend on cheap manual labor and just don’t have the educational capital to compete in a world of AI. So what are these countries going to do? If, say, you shift most production back from Honduras or Bangladesh to the USA and to Germany, because human salaries are no longer part of the equation and it’s cheaper to produce the shirt in California than in Honduras, what will the people there do? And you can say, okay, there will be many more jobs for software engineers. But we are not teaching the kids in Honduras to be software engineers. So maybe a few of them could somehow immigrate to the US. But most of them won’t, and what will they do? At present, we don’t have the economic answers and the political answers to these questions.

FL: I think that’s fair enough. I think Yuval has definitely laid out some of the critical pitfalls of this, and that’s why we need more people to be studying and thinking about this. One of the things we noticed over and over, even in this process of building the community of human-centered AI and talking to people both internally and externally, is that there are opportunities for businesses and governments around the world to think about their data and AI strategy. There are still many opportunities outside of the big players, in terms of companies and countries, to really come to the realization that it’s an important moment for their country, for their region, for their business, to transform into this digital age. And I think when you talk about these potential dangers and the lack of data in parts of the world that haven’t really caught up with this digital transformation, the moment is now, and we hope to, you know, raise that kind of awareness and encourage that kind of transformation.

YNH: Yeah, I think it’s very urgent. I mean, what we are seeing at the moment is, on the one hand, what you could call a kind of data colonization: the same model that we saw in the 19th century, where you have the imperial hub that has the advanced technology, they grow the cotton in India or Egypt, they send the raw materials to Britain, they produce the shirts, the high-tech industry of the 19th century, in Manchester, and they send the shirts back to sell them in India and outcompete the local producers. And we, in a way, might be beginning to see the same thing now with the data economy: they harvest the data in places like Brazil and Indonesia, but they don’t process the data there. The data from Brazil and Indonesia goes to California or to eastern China to be processed there. They produce the wonderful new gadgets and technologies and sell them back as finished products to the provinces or to the colonies.

Now it’s not one-to-one. It’s not the same; there are differences. But I think we need to keep this analogy in mind. And another thing that maybe we need to keep in mind in this respect, I think, is the reemergence of stone walls. Originally my specialty was medieval military history; this is how I began my academic career, with the Crusades and castles and knights and so forth. And now I’m doing all this cyborg and AI stuff. But suddenly, there is something that I know from back then: the walls are coming back. I try to look at what’s happening here. I mean, we have virtual realities, we have 3G, AI, and suddenly the hottest political issue is building a stone wall, the most low-tech thing you can imagine. And what is the significance of a stone wall in a world of interconnectivity and all that? It really frightens me that there is something very sinister there. The combination: data is flowing around everywhere so easily, but more and more countries are building walls, and it’s the same in my home country of Israel. You have, you know, the startup nation, and then the wall. And what does this combination mean?

NT: Fei-Fei, you want to answer that?

FL: Maybe we can look at the next question!

NT: You know what? Let’s go to the next question, which is tied to that. And the next question is: you have the people here at Stanford who will help build these companies, who will either be furthering the process of data colonization or reversing it, and the efforts to create a virtual wall, a world based on artificial intelligence, are being created, or at least funded, by Stanford graduates. So you have all these students here in the room: how do you want them to be thinking about artificial intelligence, and what do you want them to learn? Let’s spend the last 10 minutes of this conversation talking about what everybody here should be doing.

FL: So if you’re a computer science or engineering student, take Rob’s class. If you’re a humanist, take my class. And all of you, read Yuval’s books.

NT: Are his books on your syllabus?

FL: Not on mine. Sorry! I teach hardcore deep learning. His book doesn’t have equations. But seriously, what I meant to say is that Stanford students, you have a great opportunity. We have a proud history of bringing this technology to life. Stanford was at the forefront of the birth of AI. In fact, our professor John McCarthy, who coined the term artificial intelligence, came to Stanford in 1963 and started one of the two oldest AI labs in this country. And since then, Stanford’s AI research has been at the forefront of every wave of change in AI. And in 2019 we’re also at the forefront of starting the human-centered AI revolution, the writing of the new AI chapter. And we did all this for the past 60 years for you guys, for the people who come through the door, who will graduate and become practitioners, leaders and part of civil society, and that’s really what the bottom line is about. Human-centered AI needs to be written by the next generation of technologists who have taken classes like Rob’s, to think about the ethical implications and human well-being. And it’s also going to be written by those potential future policymakers who came out of Stanford’s humanities studies and Business School, who are versed in the details of the technology, who understand the implications of this technology, and who have the capability to communicate with the technologists. No matter how much we agree or disagree, the bottom line is that we need these kinds of multilingual leaders, thinkers and practitioners. And that is what Stanford’s Human-Centered AI Institute is about.

NT: Yuval, how do you answer that question?

YNH: On the individual level, I think it’s important for every individual, whether at Stanford, whether an engineer or not, to get to know yourself better, because you’re now in a competition. It’s the oldest advice in all the books of philosophy: know yourself. We’ve heard it from Socrates, from Confucius, from Buddha: get to know yourself. But there is a difference, which is that now you have competition. In the days of Socrates or Buddha, if you didn’t make the effort, okay, so you missed out on enlightenment. But still, the king wasn’t competing with you. They didn’t have the technology. Now you have competition. You’re competing against these giant corporations and governments. If they get to know you better than you know yourself, the game is over. So you need to buy yourself some time, and the first way to buy yourself some time is to get to know yourself better, so that they have more ground to cover. For engineers and students, I would say, and I’ll focus on engineers maybe, the two things that I would like to see coming out of the laboratories and the engineering departments are, first, tools that inherently work better in a decentralized system than in a centralized system. I don’t know how to do it. But I hope this is something that engineers can work on. I heard that blockchain is, like, the big promise in that area; I don’t know. But whatever it is, when you start designing the tool, part of the specification of what this tool should be like, I would say, is that it should work better in a decentralized system than in a centralized system. That’s the best defense of democracy.

NT: I don’t want to cut you off, because I want you to get to the second thing. But how do you make a tool work better in a democracy?

YNH: I’m not an engineer, I don’t know.

NT: Okay. Go to part two. Someone in this room, figure that out, because it’s very important.

YNH: And I can give you historical examples of tools that work better in this way or in that way. But I don’t know how to translate it into present day technology.

NT: Go to part two because I got a few more questions from the audience.

YNH: Okay, so the other thing I would like to see coming is an AI sidekick that serves me and not some corporation or government. I mean, we can’t stop the progress of this kind of technology, but I would like to see it serving me. So yes, it can hack me, but it hacks me in order to protect me. Like, my computer has an antivirus, but my brain doesn’t. It has a biological antivirus against the flu or whatever, but not against hackers and trolls and so forth. So, one project to work on is to create an AI sidekick which I pay for, maybe a lot of money, and it belongs to me, and it follows me and it monitors me and what I do in my interactions, but everything it learns, it learns in order to protect me from manipulation by other AIs, by other outside influencers. So this is something that, with present-day technology, I would like to see more effort going in that direction.

FL: Not to get into technical terms, but I think you would feel confident knowing that budding efforts in this kind of research are happening, you know: trustworthy AI, explainable AI, security-motivated or security-aware AI. So I’m not saying we have the solution, but a lot of technologists around the world are thinking along those lines and trying to make that happen.

YNH: It’s not that I want an AI that belongs to Google or to the government that I can trust. I want an AI that I’m its master. It’s serving me.

NT: And it’s powerful, it’s more powerful than my AI because otherwise my AI could manipulate your AI.

YNH: It will have the inherent advantage of knowing me very well. So it might not be able to hack you. But because it follows me around and it has access to everything I do and so forth, it gives it an edge in this specific realm of just me. So this is a kind of counterbalance to the danger that the people—

FL: But even that would raise a lot of challenges in our society. Who is accountable? Are you accountable for your actions, or is your sidekick?

YNH: This is going to be a more and more difficult question that we will have to deal with.

NT: Alright Fei-Fei, let’s go through a couple questions quickly. We often talk about top-down AI from the big companies, how should we design personal AI to help accelerate our lives and careers? The way I interpret that question is, so much of AI is being done at the big companies. If you want to have AI at a small company or personally, can you do that?

FL: So well, first of all, one of the solutions is what Yuval just said.

NT: Probably those things were built by Facebook.

FL: So first of all, it’s true, there is a lot of investment and effort and resources being put into AI research and development by big companies, but it’s not that all of AI is happening there. I want to say that academia continues to play a huge role in AI research and development, especially in the long-term exploration of AI. And what is academia? Academia is a worldwide network of individual students and professors thinking very independently and creatively about different ideas. So from that point of view, it’s a very grassroots kind of effort in AI research that continues to happen. And small businesses and independent research institutes also have a role to play. There are a lot of publicly available datasets. It’s a global community that is very open about sharing and disseminating knowledge and technology. So yes, please, by all means, we want global participation in this.

NT: All right, here’s my favorite question. This is from anonymous, unfortunately. If I am in eighth grade, do I still need to study?

FL: As a mom, I will tell you yes. Go back to your homework.

NT: Alright, Fei-Fei, what do you want Yuval’s next book to be about?

FL: Wow, I need to think about that.

NT: Alright. Well, while you think about that, Yuval, what area of machine learning do you want Fei-Fei to pursue next?

FL: The sidekick project.

YNH: Yeah, I mean, just what I said. Can we create the kind of AI that can serve individual people, and not some kind of big network? I mean, is that even possible? Or is there something about the nature of AI that inevitably will always lead back to some kind of network effect, and winner takes all, and so forth?

FL: Ok, his next book is going to be a science fiction book between you and your sidekick.

NT: Alright, one last question for Yuval, because we’ve got the top voted question. Without the belief in free will, what gets you up in the morning?

YNH: Without the belief in free will? I don’t think that’s the question … I mean, it’s very interesting, very central, it has been central in Western civilization because of some kind of basically theological mistake made thousands of years ago. But really it’s a misunderstanding of the human condition.

The real question is, how do you liberate yourself from suffering? And one of the most important steps in that direction is to get to know yourself better. For me, the biggest problem with the belief in free will is that it makes people incurious about themselves and about what is really happening inside themselves, because they basically say, “I know everything. I know why I make decisions; this is my free will.” And they identify with whatever thought or emotion pops up in their mind, because this is my free will. And this makes them very incurious about what is really happening inside, and about the deep sources of the misery in their lives. So this is what makes me wake up in the morning: to try to understand myself better, to try to understand the human condition better. And free will is just irrelevant for that.

NT: And maybe your sidekick can get you up in the morning. Fei-Fei, 75 minutes ago you said we weren’t going to reach any conclusions. Do you think we got somewhere?

FL: Well, we opened the dialog between the humanist and the technologist and I want to see more of that.

NT: Great. Thank you so much. Thank you, Fei-Fei. Thank you, Yuval. Wonderful to be here.

Fuente: www.wired.com

What is blockchain and how can it be applied to education?

With major personal data breaches happening around the world almost every day, adopting a technology that keeps records securely and immutably, such as blockchain, is increasingly becoming a necessity. And although the use of this technology has been growing in recent years, many people are still unaware of its social advantages and its potential, especially in education.

Blockchain can be applied in schools, colleges, universities, online courses, corporations, internships and many other areas of knowledge, but before understanding its applications, it is vital to understand what exactly blockchain is.

What is blockchain?

Blockchain is not a company or an application; it is a completely new way of documenting data: a series of immutable, shareable, time-stamped records managed by a group of computers.

The information in a blockchain is distributed across ledgers that record it within a community. For each block, all members must validate every transaction for it to take place; in addition, everyone involved holds a copy of that information, making it impossible to alter previous data, thanks to consensus.

Ledgers serve as tools that determine ownership of an asset, whatever its type, at any given moment. They provide a list of transfers of authority recorded in each block of information. The information on a blockchain can range from money transfers, property and identity documents to an agreement between parties or any other information that requires peer validation for confirmation and accountability.
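A minimal sketch of the hash-linked structure described above, in Python. The block fields and the validation check are simplified stand-ins for what a real distributed network and its consensus process would do.

    # Sketch: a hash-linked ledger. Each block commits to the previous block's
    # hash, so altering an old record breaks every later link in the chain.
    import hashlib, json, time

    def block_hash(body):
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def new_block(chain, data):
        prev = chain[-1]["hash"] if chain else "0" * 64
        block = {"index": len(chain), "timestamp": time.time(),
                 "data": data, "prev_hash": prev}
        block["hash"] = block_hash(block)
        chain.append(block)
        return block

    def chain_is_valid(chain):
        for i, block in enumerate(chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["hash"] != block_hash(body):
                return False                      # contents were tampered with
            if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
                return False                      # link to previous block broken
        return True

    ledger = []
    new_block(ledger, {"credential": "B.Sc. issued to student 123"})
    new_block(ledger, {"credential": "Diploma issued to student 456"})
    print(chain_is_valid(ledger))  # True; edit any old block and this turns False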

The different types of blockchain

Broadly speaking, three different types of blockchain solutions can be used:

  1. Public blockchains: anyone can download, adapt or customize them and carry out transactions. This type of blockchain usually involves millions of machines. It offers the greatest immutability and transparency, but it is also the least efficient because of its high storage costs, electricity use and low transaction speed.

  2. Private blockchains: they work by invitation only and follow a set of rules created by those who issue the invitations. Because fewer people are involved, they can be more specialized, with trade-offs such as lower immutability and transparency but greater centralization, efficiency, transaction volume and speed. All of this reduces costs and the resources used.

  3. Consortium blockchains: a hybrid between private and public; they also work only by invitation or request, but everyone invited has the same voting rights when decisions are made by consensus.

Advantages of blockchain technology

  • Authenticity: users can identify themselves while keeping all their personal and stored data private.

  • Trust: the technology gives people enough confidence in its operations to carry out transactions such as payments or certificates.

  • Transparency and provenance: users can transact knowing that every party is able to take part in that transaction.

  • Immutability: records are written and stored permanently without modification, which makes them immutable.

  • Decentralization: it removes the need for a central authority in charge of transactions and record keeping.

  • No intermediaries: users can transact directly with one another without the need for mediation by a third party.


Uses of blockchain in education

Permanently secure certificates

Educational institutions currently issue certificates or diplomas on paper, and sometimes in an electronic format that uses a public key infrastructure. These formats are time-consuming and costly to process, especially to maintain and verify; they require certification by a third party and are also prone to being lost or even destroyed through improper storage, a natural disaster, a conflict or simple human error.

By using a public blockchain to issue certificates, each user holds a unique digitally signed certificate, so verifying it only requires comparing it with the signature on the blockchain. Certificates are stored securely and permanently, which protects the user in case the issuing institution closes. Another benefit concerns third-party validation: the issuing organization no longer needs to spend additional resources confirming a credential’s validity to others, since anyone can check it themselves with this technology.

For this to happen, institutions need software, such as Blockcerts, that allows certificates to be issued with signatures published on a blockchain, as well as verification software to confirm them.
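The comparison step described above can be illustrated with a short sketch. This is not Blockcerts’ actual format; it only shows the idea of anchoring a certificate’s fingerprint and checking a copy against it (a real system would also involve issuer key pairs and a public blockchain).

    # Sketch: anchor a certificate's fingerprint and verify a copy against it.
    import hashlib

    def fingerprint(certificate_bytes):
        return hashlib.sha256(certificate_bytes).hexdigest()

    # Issuance: the institution publishes the fingerprint on the chain.
    issued_pdf = b"Diploma: Ana Perez, B.Eng., 2019"
    anchored_on_chain = fingerprint(issued_pdf)

    # Verification: anyone holding a copy recomputes the hash and compares.
    def verify(copy_bytes, anchored_hash):
        return fingerprint(copy_bytes) == anchored_hash

    print(verify(issued_pdf, anchored_on_chain))                            # True
    print(verify(b"Diploma: Ana Perez, M.Eng., 2019", anchored_on_chain))   # False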

Using blockchains to verify multi-step accreditation

Currently, each country has a different accreditation system, which means that non-governmental agencies or private companies accredit other agencies so that they can approve a certificate. Moreover, an educational organization recognizing a diploma must verify not only its issuance but also the quality of the institution granting the accreditation, which makes for a very time-consuming process.

By issuing accreditations using blockchain technology, institutions can validate a person’s credential with a single click. Automating the current process makes it easy to view the chain of accreditation and to verify that the information presented is valid and real.

Using a blockchain for automatic recognition and transfer of credits

Educational organizations that use credits to confirm student learning can use a blockchain to transfer those credits. In doing so, both the guarantee behind a certificate and the certificate itself are stored securely and immutably. For students, this means no third party needs to be involved; they only have to give future employers access to their profile, and their total credits are instantly visible and verifiable. In addition, when a student goes abroad on an international exchange program, using blockchain can be very beneficial for both the home and the host institutions.

Today, credit transfer depends on both institutions agreeing to recognize each other’s credits under certain conditions, which can end in no recognition at all. Using blockchain, these agreements can be written into contracts so that, when both parties meet those conditions, the transfer of credits is automatic.

This can only be achieved if there is a standard for existing credits, a blockchain design customized for storing this information jointly, and a high number of participating institutions.
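A rough sketch of the conditional logic such an inter-institution agreement could encode. The institutions, course codes and grade threshold below are invented for illustration; a real deployment would run this logic as a contract on a shared blockchain.

    # Sketch: the conditional logic of an inter-institution credit agreement.
    # Agreement terms and course data are invented for illustration only.
    AGREEMENT = {
        ("Uni A", "Uni B"): {"min_grade": 70, "recognized": {"CS101", "MA201"}},
    }

    def transfer_credits(home, host, completed_courses):
        terms = AGREEMENT.get((home, host))
        if terms is None:
            return []                                  # no agreement in force
        return [c for c in completed_courses
                if c["code"] in terms["recognized"]
                and c["grade"] >= terms["min_grade"]]  # both conditions met

    courses = [{"code": "CS101", "grade": 85}, {"code": "PH100", "grade": 90}]
    print(transfer_credits("Uni A", "Uni B", courses))  # only CS101 is recognized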

Using a blockchain as a lifelong learning passport

Many of the alternative credentials on the market give users a way to record their achievements; however, very few of them offer ways to verify experience and credentials. They therefore end up being like a box full of paper certificates, only digital, with no real benefit or efficient accreditation process.

Using blockchain, learners store their own achievements from any source, formal or informal, and can share them through this technology for instant verification. This can reduce CV fraud as well as the workload for organizations and individuals interested in the person submitting that résumé.

For this to happen, a verified federated digital identity must be created, along with a metadata standard that recruitment software can use to automatically check whether the person has the skills required for the position.

Blockchain to track intellectual property and reward the use and reuse of that property

Because tracking intellectual property is so complex, it is very difficult for self-publishing authors to follow the reuse of their work, especially given the cost and the way specialized institutions operate. Moreover, when open educational resources are reused, that reuse is rarely tracked, and when it is, it is through simple metrics of limited usefulness.

Educators can use blockchain to publish their work and record the references they used, which provides evidence of the publication date for copyright matters and the opportunity to track any specific resource that reuses the publication.

While systems for tracking citations do exist, they rely on intermediaries that charge high access fees in exchange for limits on how articles can be used and shared. Using blockchain, those intermediaries can be removed, allowing anyone to publish openly and track usage accurately and without restrictions. It would also allow teachers to be rewarded according to how much their publications are used and reused.

Blockchain can also be used to distribute published resources, and coins can be awarded to teachers according to the level of reuse of their resources.

Receiving student payments through blockchain

Currently, students pay for their education using a specific currency, such as dollars or pesos, but blockchain-based cryptocurrencies, such as bitcoin, could also be accepted.

Although this application might seem far off, it could be very beneficial, since many students do not have a bank account or credit card, whether because of their age, country or employment status, which makes it difficult for them to pay for their education. Cryptocurrency-based payments can be a solution to this problem; the only requirement is that both the student and the institution have a system for sending and receiving cryptocurrency.

Using verified sovereign identities for student identification within educational institutions

Students attending a large university or educational institution regularly need to identify themselves in different parts of the institution, for example to enter the library, the gym or the dormitories. In these cases, many people may have access to a student’s personal information, so keeping that information secure can be a great advantage, especially when it comes to managing access rights for each employee and ensuring that every device is protected against hacking.

By using blockchain as a verified sovereign identity, only the student and the single person responsible for verifying their identity have access to the data. In addition, the organization no longer has to spend resources on administering a complex system to keep student data secure; it only needs to protect the device or network where the initial verification was performed.

Conclusions

Unfortunately, the use of blockchain technology in education is still at a pilot stage, but it has the potential to unleash a wave of innovation around student and institutional data.

One example is Tec de Monterrey, which, together with eight other universities, is working on a global infrastructure for issuing, verifying and sharing digital certificates using blockchain, thereby supporting the education system of the future and fostering lifelong learning.

This technology can not only help automate the validation of credentials, portfolios and résumés, but can also significantly reduce data management costs for educational institutions and decentralized systems. It can also benefit teachers, since it allows them to track the use of their intellectual property.

But these benefits can only be achieved through an open use of blockchain, one that relies on open-source systems and open data standards and applies autonomous data management solutions. To get there, the development and application of this technology in education must be seen as a shared competence, so as to guarantee a balance between the private sector and the public interest.

If you want to learn more about blockchain, download the new Edu Trends: Credenciales Alternativas.

Fuente

Grech, A. and Camilleri, A. F. (2017). Blockchain in Education. Inamorato dos Santos, A. (ed.). EUR 28778 EN; doi: 10.2760/60649

observatorio.tec.mx

A 3D-printed heart with blood vessels has been made using human tissue

The rabbit-sized heart was made from the patient’s own cells and tissues, using techniques that could help increase the rate of successful heart transplants in the future.

How it worked: a tissue biopsy was taken from the patient and its materials were then separated. Some molecules, such as collagen and glycoproteins, were processed into a hydrogel, which became the printing “ink”. Once the hydrogel was mixed with stem cells from the tissue, researchers at Tel Aviv University were able to create a patient-specific heart that included blood vessels. The idea is that such a heart is less likely to be rejected when transplanted. The study was published in the journal Advanced Science.

Let it flow: until now, researchers had only been able to print simple tissues lacking blood vessels, according to the Jerusalem Post.

The impact: heart disease causes one in four deaths in the US (about 610,000 people per year), and there is a shortage of donor hearts for transplants, so 3D-printed hearts could help solve a major problem.

Next steps: the Tel Aviv team plans to culture the printed hearts and then transplant them into animals, they told the Post. However, we are still many, many years away from their use in humans.

Update: the original version of this story did not attribute the claim that researchers had only been able to print simple tissues without blood vessels. It has since been updated to explain where that claim originated.


Fuente: www.technologyreview.com

Here are the 8 types of Artificial Intelligence, and what you should know about them

Artificial intelligence (AI) is a broadly-used term, akin to the word manufacturing, which can cover the production of cars, cupcakes or computers. Its use as a blanket term disguises how important it is to be clear about AI’s purpose. Purpose impacts the choice of technology, how it is measured and the ethics of its application.

At its root, AI is based on different meta-level purposes. As Bernard Marr comments in Forbes, there is a need to distinguish between “the ability to replicate or imitate human thought”, which has driven much of AI, and more recent models which “use human reasoning as a model but not an end goal”. As the overall definition of AI can vary, so does its purpose at an applied level.

From my analysis, there are at least eight types of purpose within applied conversational AI. Within each there is a primary intent as suggested in this table:

Currently, the most common purpose of conversational AI is transactional. According to a 2017 Statista report, the public views chatbots as primarily for more immediate, personalized and simple interaction with their chosen brands. This may reflect how some companies hide the transactional purpose of their chatbots in interfaces that appear to offer play, diagnosis or information to disguise in-game sales, 2-for-1 offers or travel deals. More broadly, it more likely shows that conversational AI for transactional purposes is easier to implement and monetize.

The Statista report does suggest, however, that chatbots are gaining popularity in more complex areas where the purpose is diagnostic or behavioural, such as healthcare, telecoms and finance. These conversational AI systems, particularly in high-risk areas like health, are more difficult to develop and monetize; they may be a longer-term game.

Deliberative choice of technology

Clarity about purpose drives deliberative choices about technology: for example, how a conversational AI system is trained, the extent to which it is self-learning, and whether it is necessary or advisable to keep a human in the loop. In this respect, the debate about what is “pure” conversational AI (neural networks and deep learning) versus “simple” AI (rules-based systems) only has meaning in relation to purpose. Whilst hybrid systems are likely to become more of the norm, the current state of the field suggests that simple (more predictable) AI with a human in the loop is most relevant to behavioural and diagnostic purposes, and that there is the potential to take more risk where the purpose is playful or social.

Having said that, Microsoft’s chatbot Tay demonstrated the dangers of releasing a self-learning chatbot into a public arena as it learned hate speech within hours. This serves as a helpful reminder that all current conversational AI is based in statistics and not belief systems. For this reason, some recommend keeping a human in the loop as a better technological choice. As Martin Reddy comments: “[K]eeping a human in the loop of initial dialogue generation may actually be a good thing, rather than something we must seek to eradicate (…) Natural language generation solutions must allow for input by a human ‘creative director’ able to control the tone, style and personality of the synthetic character.”

Critically, what these points emphasize is that a conversational AI system is dynamic. Many of the commercially available chatbot builders could make this clearer. There is a notion that you build, then deploy, then tweak. The reality is that you continuously build, based on conversational logs and the metrics that you deem relevant. Those metrics, I would argue, need to be driven by the purpose of the system that you create.

From bodies to benefits

I am troubled by the dominant metrics for digital that appear to have been applied to conversational AI. The mantra of “how many, how often, how long” is not fit for many purposes yet continues to dominate how success is measured. It is time, perhaps, to add a little less sale and a larger dose of evidence to discussions on metrics.

Whilst conversion and retention rates are valid measures of engagement in digital tools, they need to be contextualized in purpose. For example, is a user who engages 10 times a day a positive outcome for a tool focused on wellbeing or is it a failure of any triaging/signposting system? Perhaps, it is time to look at meaningful use as a better indicator of the value of conversational AI.
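As a toy illustration of that distinction, the sketch below contrasts a raw engagement count with a purpose-aware measure over conversation logs. The log fields and the notion of “reaching a goal” are invented here purely to make the point concrete.

    # Sketch: raw engagement versus a purpose-aware "meaningful use" measure.
    logs = [
        {"user": "u1", "sessions": 10, "reached_goal": False},  # heavy use, no outcome
        {"user": "u2", "sessions": 2,  "reached_goal": True},   # light use, goal met
    ]

    engagement = sum(l["sessions"] for l in logs) / len(logs)          # "how often"
    meaningful_use = sum(l["reached_goal"] for l in logs) / len(logs)  # purpose met

    print(engagement, meaningful_use)  # 6.0 sessions per user vs 0.5 goal-completion rate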

Recently we worked with Permobil, a Swedish assistive technology manufacturer, to offer power wheelchair customers conversational AI focused on elements of diagnosis and behaviour change, such as power-actuator-supported exercises to reduce pressure sores. Whilst most users found the system helpful, there were different patterns of preferred usage, with relatively few people wanting to use it daily. Combined with data such as time spent in the wheelchair and the user’s underlying condition, integration of preferred usage patterns can drive algorithms for user-centric conversational AI. The more a system is personalized to the user, the better the likely outcomes for sales and customer retention.

In my view, the long-term winners in the conversational AI space will be those who can demonstrate meaningful use and beneficial outcomes that stem directly from the purpose of their systems. Purpose-related outcomes, beyond those applied to transactional conversational AI, are the best driver of successful business models.

It is easier to track the outcomes for some purposes than others. Drift’s report identified 24-hour service, instant responses and answers to simple questions as the essential consumer outcomes for chatbots in the customer service field. These metrics are easier to track than outcomes for conversational AI with diagnostic or behavioural purposes, which often require more rigorous scientific approaches.

Commitment to research on the benefits of conversational AI is essential to growing customer confidence and use of these tools, as well as to monetization strategies. Natural Language Understanding has an interesting role to play in outcome analytics. For example, at digital mental health service Big White Wall, it was possible to predict PHQ 9 [an instrument used for screening, diagnosing, monitoring and measuring the severity of depression] scores and GAD 7 [General Anxiety Disorder self-report questionnaire] scores with high confidence levels with just 20 non-continuous words contributed by a user.
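A generic sketch of how short text snippets can be mapped to a screening score. This is emphatically not Big White Wall’s model: it is a bag-of-words regression with invented training examples, shown only to make the idea concrete.

    # Sketch: predicting a screening score from short snippets of user text.
    # The training texts and scores below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    train_texts = ["hopeless tired cant sleep", "enjoying work seeing friends",
                   "worthless exhausted crying", "calm productive sleeping well"]
    train_phq9  = [21, 3, 24, 2]   # paired clinician-administered scores (invented)

    model = make_pipeline(TfidfVectorizer(), Ridge())
    model.fit(train_texts, train_phq9)

    print(model.predict(["tired hopeless no energy"]))  # predicts higher severity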

By extrapolation, there is every possibility that changes in language (words, tone, rhythm) will become valid and routine measures of outcomes related to purpose. The notion that we may no longer need formal assessment and outcome measures but simply sensitive tools to analyse language is exciting. It also raises a host of ethical issues which underlie the extent to which conversational AI will be trusted and adopted.

Intelligence without artifice

We live in an era where technological advance has thrown down the gauntlet to ethical practice, not least in the field of conversational AI. I would argue that clarity of purpose, that is reflected in how engagement and outcomes are measured, goes a long way to ensuring a framework of ethical integrity. In that respect, here are six principles that could help shape that framework:

1. Call a bot a bot

Unless you are entering the Loebner prize, where the judges expect to be tricked into believing a bot is human, there is little justification for presenting a bot as a person.

This is a more nuanced debate than a commitment to countering false news, treating customers with integrity or being able to respond to issues where someone tells a bot they are going to, say, self-harm. It is also about choice. For example, research has shown that some people prefer to talk to a bot for improved customer service or because they can be more open.

2. Be transparent in purpose

Whilst a conversational AI system may have more than one purpose, it is important not to disguise its primary intent. If a conversational AI system appears to have the purpose of informing but a hidden intent of selling, it becomes hard for a consumer either to trust the information or to be inclined to buy what is on offer.

These are serious considerations for systems like Amazon’s Alexa for which Amazon’s Choice was apparently designed. Amazon states that Choice saves time and effort for items that are highly-rated and well-priced with Prime shipping. However, it is less clear whether this is factual information or whether there are more criteria at play, such as the prioritization of Amazon products.

3. Measure for purpose

As previously suggested, metrics need to be related to purpose and meaningful to customers. Increasingly, those companies and organizations that can demonstrate valid and independent evidence that their conversational AI systems make a difference in relation to their purpose will be favoured over those who bury dubious metrics in marketing speak.

4. Adapt business models

In the digital field, some business models encourage poor ethical behaviour. Take, for example, advertising revenues driven by views, where social media giants have deliberately created addictive devices such as infinite scrolling and likes. Aza Raskin, the creator of the infinite scroll, described this as “behavioural cocaine”. He says: “Behind every screen on your phone, there are generally a thousand engineers that have worked on this thing to try to make it maximally addicting.”

The pace of technological change, with monetization often playing catch up, can lead to unintended consequences. For example, Airbnb now has to counter ‘air management’ agencies who inflate prices and disrupt local rental communities. The same trends and temptations are in play with conversational AI. It is our job, as an AI industry, to develop business models that are based on meaningful use and purpose-led outcomes.

5. Act immediately

In the dynamic and unpredictable field that is conversational AI, ethical issues are not always foreseeable. When issues do emerge we need to show leadership and bring alive the principle of “do no harm”. For example, manufacturer Genesis Toys could not be contacted for comment when German regulators pulled its interactive doll from the market after it was used for remote surveillance. Apparently, Cayla proved more responsible than her makers: when asked “Can I trust you?”, she responded, “I don’t know.” The industry needs to take collaborative responsibility, and it is to be hoped that initiatives like the Partnership on AI will take on that role.

6. Self-regulate

A lack of self-regulation impacts an industry and not just a brand. Last year, Facebook researchers proudly presented a bot that was capable of acting deceitfully in a negotiation. In Drift’s chatbot report, 27% of consumers said they would be deterred from using a chatbot if it was available only through Facebook.

To conclude, conversational AI has enormous and exciting potential to augment the human experience. However, in my view, the long-term winners in this space will be those who do not prioritize economic gain over ethical practice. The brands that gain greatest popularity will be led by pioneers of purpose who are transparent in practice, who measure meaningful outcomes and who monetize on fair rather than false premises.

Fuente: www.weforum.org

Conservative climate groups hope to seize the Green New Deal moment too

But is there a conservative approach that can really work at this stage?

  • ClearPath Executive Director Rich Powell makes the conservative case for an innovative agenda on climate change.

Climate change is having a moment.

It’s hard to untangle cause from effect, but some combination of the backlash against Trump, worsening environmental disasters, youth-driven protests, and the bold aspirations of the Green New Deal have pushed the climate conversation onto center stage.

While the debate continues over whether the sweeping economic and environmental justice proposal is good policy or politics, a number of centrist and even conservative groups pushing for action on climate also sense a shift of the Overton window here. Groups like DC think tank ClearPath see a chance to advance carbon-free energy policies that may not be so audacious, but could achieve broader support—and, in their own estimation, would be more effective in combating climate risks.

ClearPath’s executive director, Rich Powell, says the most important role the US can play is in creating carbon-free energy tools that are cheaper and better than the polluting options already on the market.

ClearPath, launched publicly in 2015, argues that the government needs to invest heavily into early research for advances in energy technologies like advanced nuclear, grid energy storage, and carbon capture systems on fossil-fuel plants, employ some light support to move them into the marketplace—and then get out of the way.

It’s far from clear that this could accelerate progress at the rate now required to address growing climate risks. But in an interview with MIT Technology Review, Powell explains why he believes it’s the most promising approach on a global scale, why more Republicans are coming around on these issues than the rest of us seem to realize (and reports seem to show), and why we shouldn’t hold our breath for an apology from the GOP for decades of climate denial.

(This interview has been edited for length and clarity.)

How would you articulate ClearPath’s theory of change on climate?

If it’s unlikely, and frankly a little bit unfair, to ask the developing world to make a preference for clean over cheap, then the challenge is how do we merge clean development and economic development? How do we make it so that a choice for clean is actually just as cost effective and just as high-performing as a choice for a heavily emitting source? And that is a technology problem.

ClearPath executive director Rich Powell. (Photo courtesy of ClearPath)

Our view is that the first priority is this innovation challenge, and channeling as much as possible of the political capital that exists to fight climate change to that. And that it also means using the US as a test bed for these new technologies.

It sounds like that starts with more significant, more focused government R&D funding for early-stage stuff?

We think that government should be primarily focused on breakthrough energy technologies as opposed to incremental energy technologies. Let’s not spend more dollars making an existing wind turbine technology 1% more efficient. Let’s focus resources on making [wind energy kites] or floating offshore wind work.

I certainly agree with the broad point that innovation will play a crucial role. But I worry it can only do so much. For instance, if we are able to use R&D to drive down the cost of carbon capture, that’s great. But it’s still going to represent an incremental cost, so what would incentivize existing plants to build them?

I think the 45Q tax incentive [which provides a tax credit of $50 for every metric ton of carbon dioxide buried underground and $35 for every ton put to work in other ways] is obviously a really good place to start. It’s a pretty rich incentive, actually, though it’s a little bit unclear whether it’s enough to incent a lot of [carbon capture and storage] retrofits. But I think it’s plenty to incent the building of new integrated CCS plants. And then I think we see how far that kind of push policy brings the cost down, and then we reevaluate.
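As a rough worked example of the per-ton figures Powell cites, the arithmetic below assumes a hypothetical plant’s capture volumes; the split between stored and utilized CO2 is invented for illustration.

    # Worked example of the 45Q figures quoted above: $50/ton stored underground,
    # $35/ton put to use. The plant's capture volumes are assumed, not sourced.
    stored_tons = 700_000      # CO2 injected into geologic storage per year
    utilized_tons = 300_000    # CO2 sold for use in other ways per year

    annual_credit = stored_tons * 50 + utilized_tons * 35
    print(f"${annual_credit:,}")   # $45,500,000 per year for a 1 Mt/yr capture plant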

The leading conservative approach to addressing climate change is a carbon tax. But you don’t support that, right?

Yes, just look at the name.

It’s a very difficult sell, frankly, for almost everyone, but for conservatives in particular, since it’s called a tax. If you look at the modeling, the primary things a carbon tax does are to incent more coal-to-gas switching, some renewables, and a bunch of energy efficiency.

It focuses on making traditional energy expensive as opposed to clean energy cheap, and we think for global impact we need to be more focused on making clean energy cheap.

Doesn’t government funding for R&D mean picking winners and losers?

We would say government should only pick winners, no losers. [Laughs]

Okay. If that works out, that’s great.

I think virtually every economist, including conservative economists, would agree that public spending on R&D is a pretty clear public good. I think most folks would actually argue we have a suboptimal level of investment, at the very least in basic research.

The great US success story in decarbonization has, of course, been the shale gas revolution. We spent $500 million at the Department of Energy on basic and applied R&D, in a public-private partnership. And then we did a tax incentive, a $10 billion alternative production tax credit, which sort of brought shale gas into the market.

We’re 27% down in power-sector carbon emissions since 2005, and most of the analysis shows at least two thirds of that has been due to the shale gas revolution.

Do you think that having a conservative group advocate for certain sets of carbon-free technologies, like nuclear and hydro, is a path to common ground and compromise with liberals, who are pushing for more solar and wind? Or is there a risk that we just end up politicizing technological tools even more?

I’ll put on my partisan hat and say, “Liberals started that.” Now I’ll take it off and say, “In the last Congress, we saw a tremendous amount of bipartisan policymaking on clean energy, the most since 2007.”

We saw new legislation introduced on advanced nuclear, advanced carbon capture, including direct air capture, and advanced storage technologies. (See “One man’s two-decade quest to suck greenhouse gas out of the sky.”) And every single thing that passed was a very bipartisan bill, and signed into law by arguably quite a partisan conservative president.

I think it shows that the innovation space on all of these technologies is pretty broadly open and very bipartisan.

Can a Republican in Congress today talk openly about climate policy and still get reelected?

Increasingly, yes.

If you take a look at the rhetoric in this Congress, there’s been agreement that the problem is real, that it has a serious human contribution, and that there ought to be a significant focus on solutions. That has been a substantial and rather rapid evolution. Republicans are just able to talk about this in a very different way than they had been before.

Do you think there has to be a broader reckoning in the Republican Party to retain intellectual credibility with their base over climate change? Are they going to have to come out and say, yes, we misled you?

It has not been my observation that politicians come out and say anything like that to their base. So I think the prospects for that are not strong.

Regardless of where the Republican Party now stands on the issue, several decades of misinformation or distortion or whatever you want to call it have certainly created a large number of people out there who still think that climate science isn’t true. So do you think that there will have to be some kind of action or shift in the rhetoric?

In terms of directly taking on the climate issue with voters, this is kind of a low-salience issue with everybody. What we often say, and we’ve got quite a lot of polling to back this up, is that this is an issue that really will help conservative policymakers with their small-r republicans, independents, and moderates. Increasingly in our focus groups we’re finding that even conservative Trump voters are no longer okay with just outright dismissal of climate science. I think they’ve just seen too much weirdness in the past year to say, “Oh, we shouldn’t be thinking about this as a problem.”

Now they might have rather limited views about what to do about the problem, and humans’ role in the problem. But they want their leaders to be open-minded and thoughtful and to propose solutions.

Source: www.technologyreview.com

Focus on a hesitant customer

When an early potential customer is hesitant, frame it as an opportunity to solve their problems rather than an excuse to walk away. If you push past the initial obstacles, explains Zūm founder and CEO Ritu Narayan, you can end up with a much more useful and scalable product. She demonstrates how persevering with a single potential client revealed an underserved market that transformed Zūm’s business model.

Source: ecorner.stanford.edu

Why should teaching astronomy be a priority in education?

Fascination with the mysteries of the cosmos – its stars, satellites, planets, galaxies, nebulae, and more – is universal. For centuries, this subject has captivated millions of people around the world. In the United States alone there are 450 astronomy clubs, according to the Astronomical Society of the Pacific. Another example is the thousands of reactions on social media sparked by the first image of a black hole, obtained on April 10, 2019.

Strictly speaking, astronomy is the science devoted to studying the universe. But what role does this discipline currently play in education? Why should its inclusion in curricula be promoted more? And what tools and resources exist to help teachers cover these topics?

Why is it important for education to connect students with astronomy?

“Astronomy should be part of the educational system,” according to astronomer John R. Percy, speaking at an International Astronomical Union symposium in 1998. “In a school context, astronomy demonstrates an alternative approach to the ‘scientific method’: the observational approach versus the theoretical one. It can attract young people to study science and engineering, and it can increase public interest in and understanding of science and technology; this is important in all countries, developed and developing alike.”

In the same address, Percy adds that many present-day phenomena and problems are linked to astronomy, such as the seasons, navigation, climate change, and biological evolution. Astronomy, he explains, goes beyond physics and the other exact sciences. For all of these reasons, it can and should be incorporated into education. In his words, it deals with the origin of life, and so it has the capacity to foster curiosity, imagination, and a sense of exploration and shared discovery.

One more reason: astronomy can make a significant contribution to building the 21st-century workforce. This idea appears in Astronomy and Astrophysics in the New Millennium: Panel Reports (2002), by the National Research Council and the Astronomy and Astrophysics Survey Committee.

“Astronomical concepts and images have universal appeal, inspiring awe and resonating uniquely with human questions about our nature and our place in the universe. This widespread interest in astronomy can be harnessed to increase students’ knowledge and understanding,” the report explains. “Moreover, the interdisciplinary nature of astronomy and its links to technology and instrumentation position the field to contribute significantly to building a strong technical workforce for the 21st century.”

Trends and resources for teaching astronomy in the classroom

The possibilities for bringing astronomy topics into the classroom – adapted to students’ grade level and age – are broad. Online, for example, there are options such as open courses, freely available books, educational games, exercises, and experiments.

Below are some of the astronomy institutions that offer educational resources in Spanish (if you are also interested in resources in English, you can find them in this article).


Asociación para la Enseñanza de la Astronomía (ApEA)
ApEA’s educational materials include resources for primary, secondary, and high school education, organized by fields such as astrophysics, cosmology, and computing applications. https://www.apea.es/materiales/

European Space Education Resource Office (ESERO)
A European Space Agency (ESA) project that supports science and technology education in primary and secondary schools and encourages scientific vocations, using space as its context. The resources are available in English and Spanish. https://esero.es/recursos/

Sociedad Astronómica del Pacífico (Astronomical Society of the Pacific)
Its “El Universo en Clase” (“The Universe in the Classroom”) section offers more than 50 articles that teachers can use. https://astrosociety.org/edu/publications/tnl/eulc.html

ServiAstro
Part of the Institute of Cosmos Sciences (ICCUB) and the Department of Quantum Physics and Astrophysics (FQA) at the University of Barcelona. Its section of resources for teachers includes courses, books, videos, dictionaries, mobile apps, and more.
https://serviastro.am.ub.edu/twiki/bin/view/ServiAstro/WebDescarrega#Manual_did_ctic_L_astronomia_a_l

Didactalia
A global educational community for teachers, parents, and students from early childhood education through high school. Its astronomy section offers a selection of educational resources ranging from infographics and texts to student projects.

Explora
Created by Chile’s National Commission for Scientific and Technological Research (CONICYT), this initiative seeks to build a scientific and technological culture. Its educational resources for teachers include specialized websites, manuals for teaching astronomy, animated shorts, and even a video game. https://www.explora.cl/blog/2017/01/27/recursos-educativos-astronomia/

References

Percy, J. R. (1998). Astronomy education: An international perspective. In International Astronomical Union Colloquium (Vol. 162, pp. 2-6). Cambridge University Press. Retrieved from:
https://www.cambridge.org/core/services/aop-cambridge-core/content/view/760F90CA2CD44A5D4C864D89B7916850/S025292110011468Xa.pdf/astronomy_education_an_international_perspective.pdf

Astronomical Society of the Pacific. (n.d.). Amateur Astronomers. Retrieved from: https://www.astrosociety.org/education/amateur-astronomy/

Board, S. S., National Research Council, & Astronomy and Astrophysics Survey Committee. (2002). Astronomy and Astrophysics in the New Millennium: Panel Reports. National Academies Press.

Source: observatorio.tec.mx