From 11-15 November 2019, the most important event in the history of artificial intelligence (AI) in Latin America took place in Montevideo, our hometown.
Khipu.ai was the first of hopefully many events to come, in which top researchers from all over the world, from both academia and industry, came to the region to share knowledge and promote much-needed diversity in the field, pushing Latin America forward.
As proud supporters of Khipu, we at Tryolabs got to attend the awesome talks and trainings provided during the event, as well as contribute our own talks and a booth featuring four demos of pose estimation models running in real time on different embedded devices.
We enjoyed sessions on varied topics, delivered by the best:
- Tutorial on generative modeling by Ian Goodfellow.
- Reinforcement learning by Nando de Freitas.
- Robotics and Continuous Control by Chelsea Finn.
- Perspectives on AI by Yoshua Bengio.
- Recurrent neural networks and NLP by Kyunghyun Cho.
While you can watch all talks online, this post gives an overview of the hottest topics discussed at Khipu and presents some main takeaways and learnings.
This post was written in collaboration with Maia Brenner, Gonzalo Marín, Jennifer Esteche, Alan Descoins and Leonardo Piñeyro.
1. AI for social good is on everyone’s lips
AI for social good was a cross-cutting theme throughout the five days of the Khipu conference, addressed by several speakers. It’s remarkable how AI is being used to tackle problems worldwide in health, education and the environment.
In his talks ML Challenges and Opportunities of Computational Behavioral Phenotyping in Developmental Health and AI for Social Good, Guillermo Sapiro explained how AI is being used to democratize access to medical care by helping identify autism indicators at an early age using smartphone cameras.
Moreover, Danielle Belgrave presented her research, which focuses on developing probabilistic and causal graphical modeling frameworks to understand disease progression over time.
Besides the efforts in AI for healthcare, there has also been great work in AI for education. Jeff Dean mentioned in his talk the Bolo app, which helps kids with no access to the Internet or schools learn to read. Furthermore, Luciana Benotti and Cecilia Aguerrebere agreed that student outcomes could be predicted with ML techniques, enabling more personalized teaching.
In another great talk, Claire Monteleoni showed how Machine Learning (ML) can shed light on climate change by improving predictions of the geographical movement of storms (hurricanes, cyclones, typhoons). In a similar vein, Jeff Dean commented on how Google AI uses ML to improve flood forecasting and give Google users actionable alerts. Moreover, Renato Assunção presented his ML model that uses Twitter data to find dengue hotspots in Brazil.
All in all, the speakers agreed with Nando de Freitas, who stated that countries in Latin America face common challenges such as poverty, security and crime, which are not yet being addressed with AI.
The overall takeaway is that there are countless opportunities in this field and yet a lot to be done.
2. Reinforcement learning keeps making progress
Reinforcement Learning (RL) was clearly one of the most referenced topics at the conference: not only did it feature prominently in the speaker sessions, but a very large number of posters also covered it. In particular, three speakers gave talks on the subject.
Nando de Freitas gave a very nice introduction to RL, explaining its fundamentals and applications in detail. He also presented several case studies from DeepMind, showing the top-notch advances in this area.
Later, Chelsea Finn dove deep into the world of robotics and RL, covering both the incredible improvements of the last couple of years and the challenges that are yet to be solved, like the poor adaptability of agents when the environment changes.
Finally, Oriol Vinyals gave a great talk about AlphaStar, the latest AI developed at DeepMind, capable of competing against the best StarCraft players in the world. Oriol placed special emphasis on the importance of imitation learning to improve and accelerate the training of the AI in the context of playing StarCraft.
Alongside the excitement about the great advances in the area, all of the aforementioned speakers expressed some uncertainty about whether transfer learning can be made to work in the context of RL. If it can, it would open a clearer path toward more robust and adaptable agents, with a world of applications.
3. Research is opening new frontiers in computer vision
Computer vision (CV) is one of the fields that has seen the greatest advances in recent years, and it could not be absent from Khipu.
Guillermo Sapiro’s keynote on AI for the identification of autism traits discussed real-time facial landmark recognition, which can run on any modern smartphone. He also highlighted the cooperation with experts from other fields: research can advance not only by improving the state of the art in different techniques, but also by applying them in clever and creative ways.
Enzo Ferrante gave an in-depth explanation of Convolutional Neural Networks (CNNs), one of the main building blocks of CV nowadays. Expanding on this, Juan Carlos Niebles gave a great talk on an extremely difficult topic: event detection in videos. He explained the different problems faced in this task, presented his work on it and showed some promising results.
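For readers new to CNNs, the core operation is a small filter slid across the image. Here is a minimal sketch of a 2-D convolution in NumPy (strictly, the cross-correlation that most deep learning frameworks call convolution), with a toy edge-detection example of our own, not code from the talk:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation: slide the kernel over the image, no padding."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Elementwise product of the kernel with the patch under it, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector (Sobel) applied to an image with a sharp vertical edge
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
print(conv2d(image, sobel_x))  # every output entry is 4.0: a strong edge response
```

In a CNN these kernels are not hand-designed like the Sobel filter above; they are learned from data, with many filters stacked in layers.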
There were some great spotlights, and also great posters from the attendees. One spotlight that stood out was given by Maria José Escobar, who explained her work on CV models that try to replicate the way the retina works, showing some really interesting results. As has often happened in the past, nature, and in particular the human body, can be a great inspiration for solving this type of problem. By trying to better understand and replicate it, we can develop new knowledge and techniques, so it’s something to always keep in mind.
A panel composed of Alvaro Soto, Maria José Escobar, Enzo Ferrante and Sandra Ávila discussed the new frontiers in CV. Essentially, the discussion revolved around how current techniques are memory-based rather than achieving real understanding, which poses a limitation. On top of this, many current datasets contain biases, a critical problem, especially when working with medical images.
Despite the major advancements the field has had, there is still a long way to go, and interesting research is being done throughout the continent.
4. Results from machine learning should be taken with caution
Incredible advancements have been made with the help of ML in recent years, as proven by the amazing work showcased by the speakers and posters presented during the conference. But it’s always good to stop and analyze if the obtained results are actually as good as they seem.
In her talk Machine Learning Fundamentals, Luciana Ferrer focused on several problems arising from the use of weak evaluation methodologies. She pointed out miscalibration as one such problem, referring to the mismatch between a model’s predicted error and the actual error obtained. In addition, she showed that heavily used models (like ResNets) suffer from this problem: they have low error rates but really bad calibration, making their outputs difficult to interpret as a measure of certainty.
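To make the calibration idea concrete, here is a minimal sketch (our own illustration, not code from the talk) of the expected calibration error (ECE), which averages the gap between a model’s stated confidence and its actual accuracy across confidence bins; the toy predictions below are invented for illustration:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the
    |accuracy - mean confidence| gap per bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in the bin
    return ece

# Toy overconfident model: ~94% average confidence, but only 60% accuracy
conf = [0.95, 0.92, 0.97, 0.91, 0.94]
hits = [1, 0, 1, 0, 1]
print(expected_calibration_error(conf, hits))  # large gap: badly calibrated
```

A well-calibrated model would score close to zero here: among predictions made with, say, 90% confidence, about 90% would actually be correct.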
Moreover, Sandra Ávila showed how bias in data, when not handled correctly, can lead to models with low error rates but nonetheless meaningless results.
When designing an evaluation methodology, we should be aware of the consequences that errors of the models or wrong interpretation of the results could have on the final solution, especially taking into consideration the impact it could have on the users. This was also a general topic of discussion during a workshop about the People + AI Guidebook where the main focus was on how to design AI solutions that align with the user needs.
5. Data security and privacy? Yes, but with trade-offs
While both security and privacy have been at the core of AI for a while, research in these fields is gaining strong momentum and was present in different talks during Khipu.
During his talk on generative models, Ian Goodfellow mentioned that security is one of the most challenging fields in AI today. In addition, Martín Abadi gave a special keynote on the topic.
In terms of privacy, ML models often rely on sensitive training data (e.g. in healthcare or education), which models may conceivably leak. After training, the inputs provided to models for inference may contain sensitive information, too. This raises many new concerns in terms of security and integrity of the different systems, such as: the poisoning of training data, the privacy of the training data itself, the creation of adversarial examples, and also model theft.
State-of-the-art ML models have huge capacity. Research on generalization has shown that models are often capable of memorizing what they have seen during training, so from a security and privacy perspective it’s important to be mindful of this fact and careful to prevent the exposure of sensitive data.
Some of the work being carried out in this direction was presented during Martín’s talk, particularly approaches like differential privacy and PATE (Private Aggregation of Teacher Ensembles) applied to ML. These methods are starting to appear in implementations for frameworks such as TensorFlow and PyTorch. But no method comes for free: when using these approaches, some prediction accuracy has to be sacrificed. There’s definitely a trade-off between security and privacy on the one hand and utility on the other, and researchers will keep pushing its boundary.
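To illustrate that trade-off, here is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy (our own toy example, not the PATE or DP-SGD implementations mentioned in the talk): noise scaled to the query’s sensitivity divided by the privacy budget ε protects individuals, at the cost of a less accurate answer:

```python
import numpy as np

def private_count(values, threshold, epsilon, rng):
    """epsilon-differentially-private count of values above a threshold.
    A counting query has sensitivity 1: adding or removing one person
    changes the count by at most 1, so Laplace noise with scale 1/epsilon
    suffices for epsilon-DP."""
    true_count = sum(v > threshold for v in values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(0)
ages = [23, 35, 41, 52, 67, 29, 48]  # hypothetical sensitive records
# Smaller epsilon means stronger privacy but a noisier, less useful answer
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: noisy count over 40 = {private_count(ages, 40, eps, rng):.2f}")
```

The same tension shows up in DP training of neural networks: tightening the privacy budget injects more noise into gradients, which is exactly the accuracy sacrifice described above.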
6. Research done in Latin America should become more accessible
Interest in ML has grown exponentially in recent years, but this growth is not equally distributed throughout the world. In his talk, Juan Carlos Niebles shared some eye-opening statistics that show the low attendance and number of publications from countries of the southern hemisphere in some of the most important ML conferences in the world.
Khipu was a great milestone towards improving these statistics, but there is still a long way to go. This topic was addressed throughout the conference, and explicitly in the panel discussion at the opening ceremony. The discussion centered around the fact that government, academia and industry should work together to develop a good atmosphere for scientific research and collaborations between the three parties.
7. We can all work towards gender diversity in AI
Google organized a Women in AI event to unite women and discuss the importance of diversity in AI and STEM in general. There was great attendance, not only by female scientists but also by other groups of people interested in making an effort towards a more equal community.
The event featured a prestigious group of panelists from academia and industry. Interestingly enough, all of the members shared a similar reason why they started in AI: the trust and confidence placed in them by a professor, tutor or other important person in their lives. This realization prompted a discussion on how every one of us can make a difference in someone’s life, and how we should unite to encourage students, especially women, to believe in themselves and share the excitement for the field.
María Simon talked about the different initiatives Facultad de Ingeniería is running to encourage young women to join the program, emphasizing that this is not only a gender issue, but usually an economic and social one as well. Moreover, Guillermo Moncecchi explained how Uruguay’s Ministry of Industry created a whole new area to address this issue and ensure there is no discrimination against minorities inside the organization.
The public was also encouraged to share their experiences and open questions, turning the event into a great opportunity to support each other and bring the community together.
The most important takeaway was that diversity in STEM is not only an issue that women should care about, as it affects everybody. We all have to make sure we are doing everything within our reach to improve the current situation.
8. Latin America has an amazing AI community
It is well known that attendance at most ML conferences is increasing steeply, and Khipu was no exception: the days started early in the morning and were very intense.
To provide some time for relaxation, Tryolabs organized the Unofficial Khipu After Party on Wednesday, on a rooftop with a beautiful sunset view right next to the venue where Khipu took place. The party was a huge success, and more than 300 people celebrated with us! Besides Khipu attendees and participants, we took the chance to invite several people from the local scene. This way, AI researchers could network with industry folks, local government representatives, and other researchers and engineers who couldn’t get a spot at the conference.
Finally, Khipu ended with a closing ceremony with some great food (including local specialties), live music and beverages. Rumor has it there was yet another after party, which went on until very late ;)
Khipus, or talking knots, are an ancient recording system used by a number of cultures in South America, mostly in the Andean region. They stored numeric and other values encoded as knots, often in a base-ten positional system.
Honoring the name, events like Khipu are contributing to generating tighter knots between AI researchers in Latin America and the world. We were super satisfied with the conference, honored to help this event take place through sponsoring, and cannot wait for it to happen again next year!
If you’re interested in upcoming AI conferences, check out our list of machine learning conferences taking place around the globe.