Pulse-Net: Optimised pruning and compression of Neural Network Structures
Apple’s Siri, Amazon’s Echo and Google Home are everyday devices now, but how do these machines understand voice commands from different people? How does Facebook recognise a face to be tagged? Is somebody sitting at the other end of the internet, waiting for us to speak a command or move the mouse to tag a friend? Of course not! So how do these machines understand what we say, given how different our voices are? How does Facebook identify faces when we all look different?
Neural Networks, and more specifically Deep Neural Networks (DNNs), have been responsible for recent breakthroughs in Artificial Intelligence, including image, object and speech recognition, statistical machine translation and even gaming. Loosely modelled on the neurons in the human brain, these artificial neural networks are built from simple mathematical operations. DNNs achieve state-of-the-art accuracy on many tasks, but they have millions of parameters, which can make them difficult to deploy on computationally limited devices.
David’s research focuses on the question: should the structure of these networks be static or dynamic? He and his team have found that a trained network can be compressed by removing unimportant neurons. The resulting smaller network can then be deployed on devices such as mobile phones.
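To make the idea of removing unimportant neurons concrete, here is a minimal sketch of one common pruning criterion, not necessarily the team’s actual method: score each hidden neuron by the L1 norm of its incoming weights and drop the lowest-scoring ones, shrinking the layer. The function name and `keep_ratio` parameter are illustrative assumptions.

```python
import numpy as np

def prune_neurons(W_in, W_out, keep_ratio=0.5):
    """Remove the least important neurons of one hidden layer.

    W_in:  (n_inputs, n_hidden) weights into the layer
    W_out: (n_hidden, n_outputs) weights out of the layer
    Importance here is the L1 norm of each neuron's incoming
    weights -- a simple, widely used heuristic (an assumption,
    not the paper's exact criterion).
    """
    importance = np.abs(W_in).sum(axis=0)             # one score per hidden neuron
    n_keep = max(1, int(keep_ratio * W_in.shape[1]))  # how many neurons survive
    keep = np.sort(np.argsort(importance)[-n_keep:])  # indices of the top neurons
    # Dropping a neuron removes its column in W_in and its row in W_out,
    # so the compressed layer genuinely has fewer parameters.
    return W_in[:, keep], W_out[keep, :]

# Example: a 4-neuron hidden layer compressed to 2 neurons.
rng = np.random.default_rng(0)
W_in = rng.normal(size=(3, 4))
W_out = rng.normal(size=(4, 2))
W_in_p, W_out_p = prune_neurons(W_in, W_out, keep_ratio=0.5)
print(W_in_p.shape, W_out_p.shape)  # -> (3, 2) (2, 2)
```

In practice the pruned network is usually fine-tuned afterwards to recover any accuracy lost by removing neurons.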
Insight Researchers: David Browne, Steven Prestwich
Non-Insight Researchers: Michael Giering
Organising Body: IEEE 5th World Forum on Internet of Things
Event: IEEE 5th World Forum on Internet of Things
Venue: Limerick
Date: 18/04/2019
Event Type: Conference
Presentation Type: Paper
Insight Researchers: David Browne
Organising Body: MCCSIS
Event: International Conference on Big Data Analytics, Data Mining and Computational Intelligence
Venue: Lisbon
Date: 23/07/2017
Event Type: Conference
Presentation Type: Paper