The Power of Deep Learning



  • This topic has 0 replies and 1 voice, and was last updated 3 years, 8 months ago by Bjarne.
Showing 1 post (of 1 total)
  • Author
  • #318961

    • Super Nova

    Under news from science (usually from the journal Science), I have previously written about problems with Google's and others' use of Deep Learning. There is a need for independent education in the application of Deep Learning. We need more people who can resist Google's widespread use of the word AI for marketing; Google's new Android 9, for example, is introduced as "Powered by AI". I have found a master's programme in the application of Big Data that is supposed to enable its graduates to separate "hype" from reality and potential dangers.

    Making neural nets uncool again is dedicated to making the power of deep learning accessible to all. Deep learning is dramatically improving medicine, education, agriculture, transport and many other fields, with the greatest potential impact in the developing world. For its full potential to be met, the technology needs to be much easier to use, more reliable, and more intuitive than it is today.

    MS in Data Science

    The Power of Deep Learning

    Deep learning has great potential for good. It is being used by students and teachers to diagnose cancer, stop deforestation of endangered rainforests, provide better crop insurance to farmers in India (who otherwise have to take predatory loans from thugs, which have led to high suicide rates), help Urdu speakers in Pakistan, develop wearable devices for patients with Parkinson's disease, and much more. Deep learning could address the global shortage of doctors, provide more accurate medical diagnoses, improve energy efficiency, increase farm yields, and reduce pesticide use.

    However, there is also great potential for harm. We are worried about unethical uses of data science, and about the ways that society's racial and gender biases are being encoded into our machine learning systems. We are concerned that an extremely homogeneous group is building technology that impacts everyone. People can't address problems that they're not aware of, and with more diverse practitioners, a wider variety of important societal problems will be tackled.

    We want to get deep learning into the hands of as many people as possible, from as many diverse backgrounds as possible. People with different backgrounds have different problems they're interested in solving. The traditional approach is to start with an AI expert and then hand them a problem to work on; we instead want people who are knowledgeable and passionate about the problems they are working on, and we'll teach them the deep learning they need.

    While some people worry that it's risky for more people to have access to AI, I believe the opposite. We've already seen the harm wreaked by elite and exclusive companies such as Facebook, Palantir, and YouTube/Google. Getting people from a wider range of backgrounds involved can help us address these problems.

    The approach

    We began with an experiment: to see if we could teach deep learning to coders, with no math prerequisites beyond high-school math, and get them to state-of-the-art results in just 7 weeks. This was very different from other deep learning materials, many of which assume a graduate-level math background, focus on theory, only work on toy problems, and leave out the practical tips. We didn't even know if what we were attempting was possible, but the course has been a huge success: students have been accepted to the elite Google Brain residency, launched companies, won hackathons, invented a new fraud-detection algorithm, had work featured on the HBO TV show Silicon Valley, and more, all from taking a course whose only prerequisite is one year of coding experience.

    This is not just an educational resource; the team also does cutting-edge research and has achieved state-of-the-art results. Their wins in Stanford's DAWNBench competition against much better funded teams from Google and Intel were covered in the MIT Tech Review and the Verge. Jeremy's work with Sebastian Ruder, achieving state of the art on 6 language classification datasets, was accepted by ACL and is being built upon by OpenAI. All this research is incorporated into the course, teaching students state-of-the-art techniques.
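    As a toy illustration of the claim that the core mechanics require nothing beyond high-school math, here is a minimal sketch (plain Python, hypothetical data, not from the course itself) of training a single artificial neuron by gradient descent — the basic building block that deep learning stacks into larger networks:

```python
import math

# Hypothetical toy dataset: learn the logical AND of two binary inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# One neuron: two weights and a bias, all starting at zero.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5  # learning rate: how big a step to take per update

for _ in range(5000):
    for (x1, x2), target in data:
        pred = sigmoid(w1 * x1 + w2 * x2 + b)
        error = pred - target  # gradient of cross-entropy loss w.r.t. the pre-activation
        # Nudge each parameter downhill along its gradient.
        w1 -= lr * error * x1
        w2 -= lr * error * x2
        b -= lr * error

# After training, the neuron separates the two classes.
preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)  # → [0, 0, 0, 1]
```

    Only multiplication, subtraction, and one exponential are involved; modern frameworks automate the same update rule across millions of parameters.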

