Members

Executive:

Members:

External Advisory Committee:


Aijun An

Affiliation: Professor, EECS
Role: Executive Committee

Research Program Summary: My team has undertaken a wide range of research activities, including the development of innovative machine learning algorithms and the application of machine learning to real-world problems. In the past year, we proposed a novel neural network architecture, called the Warped Residual Network (WarpNet), that trains multiple layers of a deep residual network in parallel through model parallelism. WarpNet achieves a 45% speed-up over the original residual network while maintaining the same predictive accuracy. A provisional patent on this work was filed in January 2018. On the applied side, we developed a number of applications of machine learning and natural language processing techniques to real-world problems with our industry partners. For example, in our collaborative project with The Globe and Mail, we developed an adaptive paywall mechanism for news media. Unlike the traditional paywall model, which allows a user to see a fixed number of articles and then directs them to the subscription page, our adaptive paywall mechanism considers the user's browsing history and the utility and cost of the articles that the user has visited and will likely visit. We formulate the paywall problem as a sequential decision problem in which decisions are made by optimizing the ratio of the aggregated utility of the articles presented to the user to their aggregated cost.
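
The decision rule can be illustrated with a minimal sketch: at each article request, compare the aggregated utility-to-cost ratio against a threshold and either show the article or direct the user to the subscription page. The per-article utility and cost estimates and the threshold below are hypothetical placeholders for illustration, not the model developed with The Globe and Mail.

```python
# Minimal sketch of a ratio-based paywall decision rule (illustrative only;
# the utility/cost values and the threshold are hypothetical).
from dataclasses import dataclass, field
from typing import List

@dataclass
class PaywallState:
    utilities: List[float] = field(default_factory=list)  # estimated utility of articles shown so far
    costs: List[float] = field(default_factory=list)      # estimated cost of giving them away for free

    def ratio_if_shown(self, utility: float, cost: float) -> float:
        """Aggregated utility / aggregated cost if one more article is shown for free."""
        u = sum(self.utilities) + utility
        c = sum(self.costs) + cost
        return u / c if c > 0 else float("inf")

def decide(state: PaywallState, utility: float, cost: float, threshold: float = 1.0) -> str:
    """Show the article while the aggregated utility-to-cost ratio stays above the threshold;
    otherwise direct the user to the subscription page."""
    if state.ratio_if_shown(utility, cost) >= threshold:
        state.utilities.append(utility)
        state.costs.append(cost)
        return "show_article"
    return "show_paywall"

# Example session with hypothetical per-article (utility, cost) estimates.
state = PaywallState()
for u, c in [(0.9, 0.2), (0.5, 0.4), (0.1, 1.0)]:
    print(decide(state, u, c))   # show_article, show_article, show_paywall
```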

Research Highlights:

  • R. Fok, A. An, Z. Rashidi and X. Wang, Decoupling the Layers in Residual Networks, Proceedings of the Sixth International Conference on Learning Representations (ICLR’18), Vancouver, Canada, April 30 – May 3, 2018.
  • Wai Kit Ricky Fok, Aijun An and Xiaogang Wang, Warped Residual Neural Network Architecture and System and Method for Training a Residual Neural Network, U.S. Patent, 160-006USPR.
  • M. Zihayat, Y. Chen and A. An, Memory-adaptive high utility sequential pattern mining over data streams, Machine Learning, Vol. 106, No. 6, June 2017, pp. 799-836.
  • R. Fok, A. An and X. Wang, Geodesic and Contour Optimization Using Conformal Mapping, Journal of Global Optimization, Vol. 69, No. 1, September 2017, pp. 23-44.
  • Y. Chen, M. L. Yann, H. Davoudi, J. Choi, A. An and Z. Mei, Contrast Pattern based Collaborative Behavior Recommendation System for Life Improvement, Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD’17), Jeju, South Korea, May 23-26, 2017, pp. 106-118.


Robert Allison

Affiliation: Professor, EECS
Role: Member

Research Program Summary: My interdisciplinary research program focuses on the interface between engineering and human psychology. I am interested in how people use vision to interact with the 3D world around us, both natural and synthetic, and irrespective of whether we act directly or through machines. Specifically, my basic research program investigates the visual perception of depth and self motion and the role of vision in the guidance and control of movement through the world. My areas of expertise include stereoscopic displays, the psychophysical and computational investigation of depth perception, analysis of eye movements, and perceptual issues in virtual-reality and other display systems.
My research enables the development of effective technology for advanced virtual reality and augmented reality and for the design of stereoscopic displays. My team has had success in applying our research, particularly in the domains of 3D film, 3D games, optometry, forestry, aviation, security, and rehabilitation. I am currently working with leaders in industrial and government research bodies on improving the state-of-the-art in advanced displays and simulations.

Research Highlights:

  • Vinnikov, M., Allison, R. S., & Fernandes, S. (2017). Gaze-contingent Auditory Displays for Improved Spatial Attention. ACM TOCHI, 24(3), 19.1-19.38.
  • Allison, R. S., Johnston, J. M., Craig, G., & Jennings, S. (2016). Airborne optical and thermal remote sensing for wildfire detection and monitoring. Sensors, 16(8), 1310.
  • Vinnikov, M., Allison, R. S., & Fernandes, S. (2016). Impact of Depth of Field Simulation on Visual Fatigue: Who are Impacted? and How? Int. Journal of Human-Computer Studies, 91, 37–51.
  • Sakano, Y. & Allison, R. S. (2014). Aftereffect of motion-in-depth based on binocular cues: effects of adaptation duration, interocular correlation and temporal correlation. Journal of Vision, 14(8), article 21, 1–14. doi:10.1167/14.8.21
  • Rushton, S., & Allison, R. (2013). Biologically-inspired heuristics for human-like walking trajectories toward targets and around obstacles. Displays, 34(2), 105–113. doi:10.
  • Zhao, J., Allison, R. S., Vinnikov, M., & Jennings, S. (2017). Estimating the Motion-to-Photon Latency in Head Mounted Displays. In IEEE Virtual Reality 2017 (pp. 313–314).
  • Tsirlin, I., Wilcox, L. M., & Allison, R. S. (2014). A computational theory of da Vinci stereopsis. Journal of Vision, 14(7), article 5


Melanie Baljko

Affiliation: Assoc. Professor, EECS
Role: Executive Committee

Research Program Summary: Baljko’s research activities sit at the intersection of Human-Computer Interaction, Critical Disability Studies, and Science and Technology Studies. Her work focuses on the improved design of assistive and rehabilitation technologies through the use of critical technical practice, meaning practices of technology-building that incorporate a critical and cultural perspective. One stream of these research activities focuses on the design and evaluation of systems that provide computer-supported speech and language therapy. This work seeks to understand and improve the interrelationship between clinical efficacy and the system’s usability and other aspects of the human-computer interaction. Another stream focuses on the design of improved assistive technologies (ATs), particularly through the use of techniques of personal-scale fabrication, digital electronics design and open-source software. This work focuses on the development of Do-It-Yourself Assistive Technologies (DIY-ATs) using participatory methods.

Research Highlights:

  • F. Hamidi, I. Gomez, and M. Baljko, “Using Participatory Design with Proxies with Children with Limited Communication,” in ASSETS’17: Proceedings of the 19th International ACM SIGACCESS Conference on Computers & Accessibility, 2017, pp. 250–259.
  • B. Haworth, E. Kearney, P. Faloutsos, M. Baljko & Y. Yunusova (2018) Electromagnetic articulography (EMA) for real-time feedback application: computational techniques, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, DOI: 10.1080/21681163.2018.1434423
  • Y. Yunusova, E. Kearney, M. Kulkarni, M. B. Haworth, M. Baljko, and P. Faloutsos, “Game-based augmented visual feedback for enlarging speech movements in Parkinson’s disease,” Journal of Speech, Language, and Hearing Research, vol. 60, pp. 1818–1825, Jun. 2017.
  • F. Hamidi and M. Baljko, “Engaging Children Using a Digital Living Media System,” in Proceedings of the 2017 Conference on Designing Interactive Systems, Edinburgh, Scotland, 2017, pp. 711–723.
  • F. Hamidi, P. M. Owuor, M. Hynie, M. Baljko, and S. McGrath, “Potentials of Digital Assistive Technology and Special Education in Kenya,” in Sustainable ICT Adoption and Integration for Socio-Economic Development, C. K. Ayo and V. Mbarika, Eds. Hershey, PA: IGI Global, 2017, pp. 125–151.


Michael Brown

Affiliation: Professor, EECS
Role: Member

Research Program Summary: My research program aims to study, design, and develop algorithms that can be used to understand the physical world from commodity cameras. My research has two main areas of interest. The first is to investigate image formation models that describe how incoming light (i.e., physical scene irradiance) is converted to camera sensor responses under various imaging scenarios. The second is to study how commodity cameras convert these raw sensor responses to the final output image, and how this process can be improved to represent the physical scene more faithfully. This second focus is of particular importance because cameras apply a number of processing steps (collectively known as the in-camera processing pipeline) that modify the raw sensor image to produce visually pleasing outputs. Over the last five years, my research has demonstrated how this “photo-centric” camera design is detrimental to many low-level computer vision algorithms that attempt to link the image back to the physical environment. To address this problem, my research group has proposed state-of-the-art radiometric calibration methods that can accurately model the in-camera photo-finishing routines. Moreover, my students, collaborators, and I have examined how various components in the camera processing pipeline can be improved to provide more accurate reproductions of imaged scenes.
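
To make the notion of the in-camera processing pipeline concrete, the sketch below applies a few typical photo-finishing stages (white-balance gains, a 3x3 color correction matrix, and a gamma-style tone curve) to a linear raw image. The specific gains, matrix, and gamma value are placeholder assumptions for illustration, not calibrated values from this research group.

```python
# Simplified in-camera processing pipeline (illustrative; the gains, color
# correction matrix, and gamma are placeholder assumptions).
import numpy as np

def process_raw(raw_rgb: np.ndarray) -> np.ndarray:
    """Map a demosaiced, linear raw image (floats in [0, 1]) to a display-ready output."""
    # 1. White balance: per-channel gains (placeholder values).
    wb_gains = np.array([2.0, 1.0, 1.5])
    img = raw_rgb * wb_gains

    # 2. Color correction: 3x3 matrix from sensor RGB to a canonical RGB space
    #    (placeholder matrix; each row sums to 1 so neutral colors stay neutral).
    ccm = np.array([[ 1.6, -0.4, -0.2],
                    [-0.3,  1.5, -0.2],
                    [-0.1, -0.5,  1.6]])
    img = np.clip(img @ ccm.T, 0.0, 1.0)

    # 3. Photo-finishing tone curve, here reduced to a simple gamma.
    return img ** (1.0 / 2.2)

# Example on a random "raw" image.
raw = np.random.rand(4, 4, 3)
out = process_raw(raw)
print(out.shape, float(out.min()), float(out.max()))
```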

Research Highlights:

  • Nguyen R., Price B., Cohen S., Brown M. S. (2017) “Group-Theme Recoloring for Multi-Image Color Consistency”, Computer Graphics Forum (Proc. Pacific Graphics), 36(7), Oct 2017
  • Nguyen R., Brown M. S. (2017) “RAW Image Reconstruction using a Self-Contained sRGB-JPEG Image with Small Memory Overhead”, International Journal of Computer Vision (IJCV), 2017
  • Li Y., Tan R., Gou X., Lu J.B., Brown M. S. (2017) “Single Image Rain Streak Decomposition Using Layer Priors”, IEEE Transactions on Image Processing (T-IP), 26(8), Aug 2017
  • Zhu L., Fu C.-W., Brown M.S. (2017) “A Non-Local Low-Rank Framework for Ultrasound Speckle Reduction”, IEEE Computer Vision and Pattern Recognition (CVPR’17), July 2017
  • Nguyen R., Brown M. S. (2017) “Forget Luminance Conversion and Do Something Better”, IEEE Computer Vision and Pattern Recognition (CVPR’17), July 2017


Marcus Brubaker

Affiliation: Assistant Professor, EECS
Role: Member

Research Program Summary: I am interested in building rich, detailed models of the world which capture fundamental relationships between the world and our observations of it. Such models ultimately enable us to measure and predict sometimes surprising details. Most recently I have been focusing on the problem of estimating the 3D structure of biological molecules, such as proteins and viruses, with cryo-electron microscopy (cryo-EM). However, I have also worked on vehicle localization for robotics, physically realistic models of human motion, probabilistic programming languages, Bayesian methods, MCMC and forensic ballistics.

Research Highlights:

  • Duez, P., Weller, T., Brubaker, M., Hockensmith, R. E., & Lilien, R. (2017). Development and validation of a virtual examination tool for firearm forensics. Journal of forensic sciences.
  • Ma, W. C., Wang, S., Brubaker, M. A., Fidler, S., & Urtasun, R. (2017, May). Find your way by observing the sun and other semantic cues. In Robotics and Automation (ICRA), 2017 IEEE International Conference on (pp. 6292-6299). IEEE.
  • Tesfaldet, M., Brubaker, M. A., & Derpanis, K. G. (2017). Two-stream convolutional networks for dynamic texture synthesis. arXiv preprint arXiv:1706.06982.


Vic DiCiccio

Affiliation: Director, Institute for Computer Research, University of Waterloo
Role: Advisory Committee


Sven Dickinson

Affiliation: Professor and Chair, Dept. of Computer Science, University of Toronto
Role: Advisory Committee


Andrew Eckford

Affiliation: Assoc. Professor, EECS
Role: Member

Research Program Summary: Andrew’s research is concerned with information theory, molecular communication, and computational biology. Focusing on biological and biologically-inspired communication, Andrew obtains mathematical models for these communication systems and determines their overall capacity for carrying reliable information. In addition to analyzing natural communication, he investigates ways to design new communication systems that use chemical rather than electromagnetic principles, which may be used to create “nano-networks” that enable advanced nanotechnological applications. He also works on related signal processing problems, such as optimal detection of molecular signals and minimal distortion of optogenetically-generated neural spike trains. Andrew’s work has been covered in media including The Wall Street Journal, The Economist, and IEEE Spectrum. More information about his research can be found on his lab web page: http://eckfordlab.org/.
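
The notion of determining a channel's capacity can be made concrete with the standard Blahut-Arimoto algorithm applied to a toy discrete memoryless channel. The binary symmetric channel below is a generic textbook example, not one of the molecular or biological channel models studied in this program.

```python
# Toy illustration: channel capacity of a discrete memoryless channel via the
# standard Blahut-Arimoto algorithm (generic example, not a molecular-channel model).
import numpy as np

def blahut_arimoto(W, iters=500, tol=1e-12):
    """Capacity (bits/channel use) of a channel with transition matrix W[x, y] = P(y | x)."""
    W = np.asarray(W, dtype=float)
    p = np.full(W.shape[0], 1.0 / W.shape[0])          # input distribution, start uniform

    def info_density(p_in):
        """d[x] = KL( W(.|x) || P(y) ) in bits, given the input distribution p_in."""
        py = p_in @ W                                   # induced output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(W > 0, W / py, 1.0)
            return np.where(W > 0, W * np.log2(ratio), 0.0).sum(axis=1)

    for _ in range(iters):
        d = info_density(p)
        p_new = p * np.exp2(d)                          # Arimoto multiplicative update
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return float(p @ info_density(p)), p                # I(X;Y) at the optimized input

# Binary symmetric channel with crossover 0.1: capacity = 1 - H(0.1) ~ 0.531 bits.
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
capacity, p_opt = blahut_arimoto(W)
print(round(capacity, 3), p_opt)
```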

Research Highlights:

  • A. Noel, D. Makrakis, and A. W. Eckford, “Distortion distribution of neural spike train sequence matching with optogenetics,” to appear in IEEE Transactions on Biomedical Engineering. arXiv:1708.06641
  • H. Awan, R. S. Adve, N. Wallbridge, C. Plummer, and A. W. Eckford, “Characterizing information propagation in plants,” to appear in Proc. IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, UAE, 2018. arXiv:1805.06336
  • A. W. Eckford, B. Kuznets-Speck, M. Hinczewski, and P. J. Thomas, “Thermodynamic properties of molecular communication,” in Proc. IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 2018.
  • Y. Fang, A. Noel, N. Yang, A. W. Eckford, and R. A. Kennedy, “Convex optimization of distributed cooperative detection in multi-receiver molecular communication,” IEEE Transactions on Molecular, Biological, and Multi-Scale Communication, vol. 3, no. 3, pp. 166-182, Sep. 2017.
  • P. J. Thomas and A. W. Eckford, “Capacity of a simple intercellular signal transduction channel,” IEEE Transactions on Information Theory, vol. 62, no. 12, pp. 7358-7382, Dec. 2016.


Jeff Edmonds

Affiliation: Professor, EECS
Role: Member

Research Program Summary: I consider theoretical problems in three important areas of computer science: Scheduling, Lower Bounds, and Randomness. Most broadly, the Scheduling area considers how to allocate a shared resource, such as processing time, battery power, or internet bandwidth, to a steady stream of incoming jobs. The problem is to determine how to best allocate and reallocate these resources to the jobs as they arrive and complete, in a way that minimizes some global quality-of-service measure. Lower Bounds problems have both practical and theoretical importance: we would like to know the minimum amount of time (or space) needed to solve a given computational problem on an input of size n. An upper bound provides an algorithm that achieves some time bound; a lower bound proves that no algorithm, no matter how clever, correctly solves the problem faster. Finally, Randomness issues have applications in reliability analysis and probabilistic verification of interface systems.
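
The flavour of these scheduling problems can be illustrated with the classic shortest-remaining-processing-time (SRPT) rule, which minimizes total flow time on a single preemptive processor. The job set in the sketch below is made up for illustration and is not tied to any particular result in this research program.

```python
# Illustrative single-machine online scheduler using the classic SRPT rule
# (shortest remaining processing time first). The job set is a made-up example.
import heapq

def srpt_total_flow_time(jobs):
    """jobs: list of (release_time, processing_time). Returns total flow time
    (sum over jobs of completion_time - release_time) under SRPT."""
    jobs = sorted(jobs)                       # order by release time
    t, i, total_flow = 0.0, 0, 0.0
    ready = []                                # heap of (remaining_time, release_time)
    while i < len(jobs) or ready:
        if not ready:                         # idle until the next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(ready, (jobs[i][1], jobs[i][0]))
            i += 1
        rem, rel = heapq.heappop(ready)       # job with the least remaining work
        # Run it until it finishes or the next job arrives (a preemption point).
        next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_arrival - t)
        t += run
        rem -= run
        if rem > 1e-12:
            heapq.heappush(ready, (rem, rel)) # preempted, back into the pool
        else:
            total_flow += t - rel             # job completed
    return total_flow

# Three jobs as (release, size): SRPT preempts the long job when short ones arrive.
print(srpt_total_flow_time([(0, 10), (1, 2), (2, 1)]))   # 17.0
```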

Research Highlights:

  • M. Solbach, S. Voland, J. Edmonds, and J. Tsotsos, “Random Polyhedral Scenes: An Image Generator for Active Vision System Experiments,” arXiv:1803.10100, 2018
  • J. Edmonds, V. Medabalimi, and T. Pitassi, “Hardness of Function Composition for Semantic Read once Branching Programs,” Computational Complexity Conference, 2018.
  • K. Hossain, S. Datta, I. Hossain, and J. Edmonds, “ResVMAC: A Novel Medium Access Control Protocol for Vehicular Ad hoc Networks,” Procedia Computer Science, 109, pp. 432-439, 2017.
  • S. Dobreva, J. Edmonds, D. Komm, R. Kralovic, R. Kralovic, S. Krug, and T. Momke, “Improved Analysis of the Online Set Cover Problem with Advice”, Theoretical Computer Science, 2017.


Petros Faloutsos

Affiliation: Professor, EECS & Affiliate Scientist, Toronto Rehabilitation Institute
Role: Member

Research Program Summary: Our project on crowd analytics and visualization focuses on using static and dynamic techniques to provide designers with the ability to evaluate a space for the purposes of visibility, accessibility and ease of navigation. Static measures, inspired by space syntax, characterize the way a space is perceived and potentially used by its intended inhabitants. Dynamic measures such as crowd flow, which are derived from crowd simulations, provide an alternative way of characterizing how virtual humans move through and occupy a space. Both types of measures must be computed efficiently if they are to be used in real-time, user-in-the-loop systems that help designers inform their creative process. One of our ongoing projects is to use high-end GPUs to compute these measures repeatedly and in real time for complex environments, as part of iterative design processes. Our aim is to test the resulting system with an architectural firm in Toronto. Towards this goal, we have established collaborations with architectural firms in the GTA.

Research Highlights:

  • “Sentence-Level Movements in Parkinson’s Disease: Loud, Clear, and Slow Speech”, E. Kearney, Y. Yunusova, M. Kulkarni, B. Haworth, M. Baljko, P. Faloutsos, Journal of Speech, Language, and Hearing Research, 60(12), pp. 3426-3440, 2017, doi: 10.1044/2017_JSLHR-S-17-0075.
  • “Code: Crowd-optimized design of environments”, Brandon Haworth, Muhammad Usman, Glen Berseth, Mahyar Khayatkhoei, Mubbasir Kapadia, and Petros Faloutsos, Computer Animation and Virtual Worlds, e1749, 2017.
  • “On density-flow relationships during crowd evacuation”, Brandon Haworth, Muhammad Usman, Glen Berseth, Mahyar Khayatkhoei, Mubbasir Kapadia, and Petros Faloutsos, Computer Animation and Virtual Worlds, 28(3-4): e1783, 2017.
  • “Game-based augmented visual feedback for enlarging speech movements in Parkinson’s disease”, Y. Yunusova, E. Kearney, M. Kulkarni, B. Haworth, M. Baljko, P. Faloutsos, Journal of Speech Language Hearing Research, Special Issue on Motor Speech Conference, Volume 60, June, 2017, pp. 1818 – 1825.


Michael Jenkin

Affiliation: Professor, EECS and Director, York Centre for Field Robotics
Role: Member


Research Program Summary: Michael Jenkin works in the fields of visually guided autonomous robots and virtual reality. He has published over 150 research papers, including co-authoring Computational Principles of Mobile Robotics with Gregory Dudek and a series of co-edited books on human and machine vision with Laurence Harris. Current research interests include: sensing strategies for AQUA, an amphibious autonomous robot being developed as a collaboration between McGill University and York University; the development of tools and techniques to support 3D scene reconstruction; and the understanding of the perception of self-motion and orientation in unusual environments, including microgravity. He is currently a key member of the CSA VECTION project, which will deploy experiments on the International Space Station starting in summer 2018.

Research Highlights:

  • Harris, L. R., Jenkin, M., Jenkin, H., Zacher, J. E. and Dyde, R. T. The effect of long-term exposure to microgravity on the perception of upright. Nature Microgravity, 3:3, 2017.
  • Codd-Downey, R. and Jenkin, M. On the utility of additional sensors in aquatic simultaneous localization and mapping. Proc. IEEE ICRA 2017, Singapore, 2017.
  • Hoveidar-Sefid, M. and Jenkin, M. Autonomous trail following. Proc. ICINCO 2017, Madrid, Spain. 2017.
  • Codd-Downey, R., Jenkin, M. and Allison, K. Milton: An open hardware underwater autonomous vehicle. Proc. IEEE ICIA 2017, Macau, 2017.
  • Nguyen, M., Quevedo-Uribe, A., Kapralos, B., Jenkin, M., Kanev, M. and Jaimes, N. An experimental training support framework for eye fundus examination skill development. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, doi: 10.1080/21681163.2017.1376708, 2017.


Kelly Lyons

Affiliation: Professor, Information Science, University of Toronto
Role: Advisory Committee


Zhen Ming (Jack) Jiang

Affiliation: Assistant Professor, EECS
Role: Member

Research Program Summary: Dr. Zhen Ming (Jack) Jiang’s research interests lie within Software Engineering and Computer Systems, with special interests in software performance engineering, mining software repositories, source code analysis, software architectural recovery, software visualization, and debugging and monitoring of distributed systems. Some of his research results have already been adopted and are used in practice on a daily basis. He is the co-founder and co-organizer of the annually held International Workshop on Large-Scale Testing (LT). He is also the recipient of several best paper awards, including at ICST 2016, ICSE 2015 (SEIP track), ICSE 2013, WCRE 2011 and MSR 2009 (challenge track).

Research Highlights:

  • Tse-Hsun Chen, Weiyi Shang, Zhen Ming (Jack) Jiang, Ahmed E. Hassan, Mohamed Nasser, and Parminder Flora. Finding and Evaluating the Performance Impact of Redundant Data Access for Applications that are Developed Using Object-Relational Mapping Frameworks. IEEE Transactions on Software Engineering (TSE).
  • Boyuan Chen and Zhen Ming (Jack) Jiang. Characterizing and Detecting Anti- patterns in the Logging Code. In Proceedings of the 39th International Conference on Software Engineering (ICSE).
  • Tse-Hsun Chen, Mark D. Syer, Weiyi Shang, Zhen Ming (Jack) Jiang, Ahmed E. Hassan, Mohamed Nasser and Parminder Flora. Analytics-Driven Load Testing: An Industrial Experience Report on Load Testing of Large-Scale Systems. In the Companion Proceedings of the 39th International Conference on Software Engineering (ICSE), Software Engineering in Practice (SEIP) track.
  • Shaiful Chowdhury, Silvia Di Nardo, Abram Hindle, and Zhen Ming (Jack) Jiang. An Exploratory Study on Assessing the Energy Impact of Logging on Android Applications. Empirical Software Engineering (EMSE).
  • Ruoyu Gao, and Zhen Ming (Jack) Jiang. An Exploratory Study on Assessing the Impact of Environment Variations on the Results of Load Tests. In Proceedings of the 14th International Conference on Mining Software Repositories (MSR).


Yves Lesperance

Affiliation: Associate Professor, EECS
Role: Member

Research Program Summary: My research is in the area of AI and knowledge representation and reasoning, especially reasoning about action, autonomous agents, and multi-agent systems. My recent work focuses on the development of languages, algorithms, and tools for verifying and synthesizing agents and multi-agent systems that satisfy given specifications. Verification is important because multi-agent systems often have emergent properties that are hard to predict, and users want guarantees before they will trust such systems. Agents are often used to implement adaptive systems that can monitor and repair themselves; these are difficult to design and debug by hand, and automated synthesis techniques can be used to support configuration and adaptation. The focus is not primarily on planning, where one synthesizes a plan to achieve a goal from a set of primitive actions. Instead, we consider problems such as customization/supervision, where we already have an agent or system and want to constrain its behaviour to meet a set of specifications, and composition/orchestration, where we have a set of available agents and want to compose them to obtain a new system that satisfies some temporally extended goal.

Research Highlights:

  • Banihashemi, B., De Giacomo, G., and Lespérance, Y., Abstraction in Situation Calculus Action Theories. Proc. of the 31st AAAI Conference on Artificial Intelligence, 1048–1055, San Francisco, CA, USA, February, 2017.
  • Banihashemi, B., De Giacomo, G., and Lespérance, Y., Hierarchical Agent Supervision. To appear in Proc. of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018), Stockholm, Sweden, July 10-15, 2018, IFAAMAS.
  • Banihashemi, B., De Giacomo, G., and Lespérance, Y., Abstraction of Agents Executing Online and their Abilities in the Situation Calculus. To appear in Proc. of the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence, Stockholm, Sweden, July 13-19, 2018.
  • Marrella, A. and Lespérance, Y., A Planning Approach to the Automated Synthesis of Template-Based Process Models. Service Oriented Computing and Applications, 11(4), 367–392, 2017.


Marin Litoiu

Affiliation: Assoc. Professor, School of Information Technology
Role: Executive Committee

Research Program Summary: In 2017-2018, Litoiu’s research team focused mainly on performance modelling, estimation and optimization for software systems. His team proposed and developed a control-theory-based autonomic manager for managing the performance of cloud applications [1]. This approach improves the assurances of software systems and enables system developers to systematically design a performance controller that achieves the runtime performance requirements. In [2], his team showed how controlling delays in application scenarios can improve overall response times; delays can be manipulated by controlling the shared bandwidth of the application. The techniques developed in [1] and [2] can be used to deliver elastic containerized applications, as shown in [4], where the authors illustrated how IoT performance can be adjusted at runtime using autonomic controllers.
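
The control-theoretic idea can be sketched with a generic proportional-integral (PI) feedback loop that scales container replicas to track a response-time target. The toy latency model, controller gains, and workload below are assumptions made for illustration; they are not the controller designed in [1].

```python
# Sketch of a feedback (PI) controller that scales container replicas to track
# a response-time target. The toy latency model, gains, and workload are
# illustrative assumptions, not the published controller design.

def simulate(target_ms=200.0, steps=15, kp=0.005, ki=0.01):
    replicas = 1.0
    prev_error = 0.0
    workload = 500.0                          # requests/s, held constant in this toy run
    for step in range(steps):
        latency = 50.0 + workload / replicas  # toy plant: more replicas -> lower latency
        error = latency - target_ms           # positive error -> under-provisioned
        # Incremental (velocity-form) PI update; at steady state the error goes to zero.
        replicas = max(1.0, replicas + kp * (error - prev_error) + ki * error)
        prev_error = error
        print(f"step {step:2d}: latency={latency:6.1f} ms  replicas={replicas:4.2f}")

simulate()
```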

Research Highlights:

  • 1. C. Barna, M. Fokaefs, M. Shtern, M. Litoiu, “Runtime Performance Management for Cloud Applications with Adaptive Controllers,” ACM International Conference on Performance Engineering (ICPE), April 2018.
  • 2. N. Beigi-Mohammadi, M. Shtern, M. Litoiu, “A Model-based Application Autonomic Manager with Fine Granular Bandwidth Control,” IEEE International Conference on Network and Service Management (CNSM), Tokyo, Japan, Nov. 2017.
  • 3. B. Ramprasad, J. McArthur, M. Fokaefs, C. Barna, M. Damm, M. Litoiu (2018). “Leveraging Existing Sensor Networks as IoT Devices for Smart Buildings,” IEEE World Forum on Internet of Things, Singapore, Jan 2018.
  • 4. M. Fokaefs, C. Barna, M. Litoiu, “Delivering Elastic Containerized Cloud Applications to Enable DevOps,” IEEE/ACM 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), May 2017.


Manos Papagelis

Affiliation: Assistant Professor, EECS
Role: Member

Research Program Summary: Papagelis’ research interests include data mining, machine learning, graph mining, network science, big data, and knowledge discovery. The emphasis of his research is on theoretical foundations and novel models and algorithms that can provide fast and accurate solutions to complex computational problems and can support decision making in application domains as broad as science, business, health, sociology and engineering.

Research Highlights:

  • Tensor Methods for Group Pattern Discovery of Pedestrian Trajectories. Abdullah Sawas, Abdullah Abuolaim, Mahmoud Afifi, and Manos Papagelis. 19th IEEE International Conference on Mobile Data Management (IEEE MDM’18).
  • Trajectolizer: Interactive Analysis and Exploration of Trajectory Group Dynamics. Abdullah Sawas, Abdullah Abuolaim, Mahmoud Afifi, and Manos Papagelis. 19th IEEE International Conference on Mobile Data Management (IEEE MDM’18 Demos).
  • BIM-based collaborative design and socio-technical analytics of green buildings. El-Diraby, Tamer, Thomas Krijnen, and Manos Papagelis. Automation in Construction 82 (2017): 59-74.
  • Learning Emotion-enriched Word Representations. Ameeta Agrawal, Manos Papagelis, Aijun An. The 27th International Conference on Computational Linguistics (COLING’18, under review).
  • Scene Classification in Indoor Environments for Robots using Context Based Word Embeddings. Bao Xin Chen, Raghavender Sahdev, Dekun Wu, Xing Zhao, Manos Papagelis and John K. Tsotsos. IEEE International Conference on Robotics and Automation (IEEE ICRA’18 MRP Workshop, to appear).


Ali Sadeghi-Naini

Affiliation: Assistant Professor, EECS
Role: Member

Research Program Summary: The focus of our research program is to develop quantitative imaging and biomarker technologies integrated with emerging machine learning and computational modeling techniques for precision medicine and personalized therapeutics. Specifically, we develop smart quantitative imaging technologies to detect and characterize cancer, to facilitate precise cancer-targeting interventions, and to predict/monitor cancer response to treatment.

Research Highlights:

  • A. Sadeghi-Naini, H. Suraweera, W. T. Tran, F. Hadizad, G. Bruni, R. Fallah Rastegar, B. Curpen, G. J. Czarnota. Breast-lesion characterization using textural features of quantitative ultrasound parametric maps. Nature Scientific Reports. 2017; 7:13638.
  • A. Sadeghi-Naini, L. Sannachi, H. Tadayyon, W. T. Tran, E. Slodkowska, M. E. Trudeau, S. Gandhi, K. Pritchard, M. C. Kolios, G. J. Czarnota. Chemotherapy-response monitoring of breast cancer patients using quantitative ultrasound-based intra-tumour heterogeneities. Nature Scientific Reports. 2017; 7:10352.
  • S. R. Mousavi, H. Rivaz, G. J. Czarnota, A. Samani, Ali Sadeghi-Naini. Ultrasound elastography of the prostate using an unconstrained modulus reconstruction technique: a pilot clinical study. Translational Oncology. 2017; 10(5):744-751.
  • W. T. Tran, M. J. Gangeh, L. Sannachi, L. Chin, E. Watkins, S. G. Bruni, R. Fallah Rastegar, B. Curpen, M. E. Trudeau, S. Gandhi, M. Yaffe, E. Slodkowska, C. Childs, A. Sadeghi-Naini, G. J. Czarnota. Predicting breast cancer response to neoadjuvant chemotherapy using pretreatment diffuse optical spectroscopic texture analysis. British Journal of Cancer. 2017; 116(10):1329-1339.
  • H. Tadayyon, L. Sannachi, M. J. Gangeh, C. Kim, S. Ghandi, M. E. Trudeau, K. Pritchard, W. T. Tran, E. Slodkowska, A. Sadeghi-Naini, G. J. Czarnota. A priori prediction of neoadjuvant chemotherapy response and survival in breast cancer patients using quantitative ultrasound. Nature Scientific Reports. 2017; 7:45733.


Zbigniew Stachniak

Affiliation: Associate Professor, EECS
Role: Member

Research Program Summary: Research is conducted in two areas: Artificial Intelligence (AI) and History of Computing (HC). In AI, I’m interested in theoretical foundations of automated reasoning and propositional satisfiability. My research in these areas is focused on non-clausal and circuit-level methods. In HC, my current research activities are centered on the history of computing in Canada, history of software, software preservation, and hardware emulation. These activities involve archival research, software emulation of hardware, and software recovery and preservation. I also curate York University Computer Museum.

Research Highlights:

  • Stachniak, Z. “MCM on Personal Software”, IEEE Annals of the History of Computing (2017).
  • Stachniak, Z. “Programmable calculators and the rise of personal computing in the 1970s”, British Society for the History of Science Annual Meeting, University of York (2017).
  • Stachniak, Z. “Software Recovery and Beyond: The MCM/70 Case”, to be published (2018).


John K. Tsotsos

FRSC DRP CRC
Affiliation: Professor, EECS
Role: Director

Research Program Summary: Research focuses on two goals: to further understanding of how vision is the primary sense that guides human behavior, and to use this understanding to build active agents that purposely behave in real environments. We employ the full spectrum of computational methods as well as human experimental studies. Our model of visual attention will be extended to support visual reasoning and task execution in dynamic environments. These require interactions with memory, control, sensor sub-systems, and joint attention for interactions with other agents. Relevance will be demonstrated with applications to autonomous driving and companion robots for the elderly and in manufacturing.

Research Highlights:

  • Avella-Gonzalez, O.J., Tsotsos, J.K., ST-neuron dynamics explain short and long-term effects on the firing rate during attentional states, Frontiers in Neuroscience – Perception Science, 12, 123, 2018.
  • Rasouli, A., Kotseruba, I., Tsotsos, J.K., Understanding Pedestrian Behavior in Complex Traffic Scenes, IEEE Transactions on Intelligent Vehicles, vol. PP, no. 99, pp. 1-1. doi: 10.1109/TIV.2017.2788193
  • Bartsch, M.V, Loewe, K., Merkel, C., Heinze, H.-J., Schoenfeld, M. A., Tsotsos, J.K., Hopf, J.-M., Attention to color sharpens neural population tuning via feedback processing in the human visual cortex hierarchy, J. Neuroscience 25 October 2017, 37 (43) 10346-10357.
  • Tsotsos, J.K., Complexity Level Analysis Revisited: What Can 30 Years of Hindsight Tell Us About How the Brain Might Represent Visual Information?, Frontiers in Psychology – Cognition, doi: 10.3389/fpsyg.2017.01216 (July 3, 2017)
  • Bajcsy, R., Aloimonos, Y. & Tsotsos, J.K. Revisiting Active Perception, Autonomous Robots 42(2), 177-196, doi:10.1007/s10514-017-9615-3, 2018


Vassilios Tzerpos

Affiliation: Assoc. Professor, EECS
Role: Member

Research Program Summary: My research interests are in the area of deep learning applications for audio and music. Research projects in the APTLY lab, which I direct, include emotion/mental state detection in audio recordings, as well as evaluating music mastering quality. I am also interested in music classification, music transcription, and music generation. In addition, I direct the LaSSoftE lab, which develops software for social causes. Our current project involves developing software that allows educators to develop scenarios for a hardware device that teaches visually impaired children to read Braille. We are always looking for more projects that fit the mission statement of the lab.

Research Highlights:

  • “Transfer Learning in Neural Networks: An Experience Report”, Mark Shtern, Rabia Ejaz, and Vassilios Tzerpos, Proceedings of the 27th Annual International Conference on Computer Science and Software Engineering, November 2017, pp. 201-210.
  • “A Model for Analysis and Presentation of Design Pattern Detection Results”, Shouzheng Yang and Vassilios Tzerpos, Proceedings of the 33rd Annual ACM Symposium on Applied Computing, April 2018.
  • Proposed and seen through to successful approval of the Industry Partnership Stream in Computer Science. Established the first partnership with Shopify.
  • The development and alpha release of an authoring app that allows the development of scenarios for the Treasure Braille Box device. The public release of the app will enable user studies on the effectiveness of the device to be conducted: https://github.com/biltzerpos/TBB/. The Treasure Braille Box project was a joint project with the Classy Cyborgs non-profit (project led by Prof. Baljko).


Richard Wildes

Affiliation: Associate Professor, EECS & Chair, EECS & Associate Director, VISTA
Role: Member

Research Program Summary: Wildes’ research interests are in computational vision, especially video understanding and machine vision applications, as well as artificial intelligence. Current projects include the development of analytically defined convolutional networks for video understanding, video-based action recognition, video prediction and image-guided surgery systems.

Research Highlights:

  • I. Hadji and R. P. Wildes, What do we understand about convolutional networks?, arXiv e-print arXiv:1803.08834v1, 2018.
  • I. Hadji and R. P. Wildes, A spatiotemporal oriented energy network for dynamic texture recognition, Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
  • C. Feichtenhofer, A. Pinz and R. P. Wildes, Spatiotemporal multiplier networks for video action recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • C. Feichtenhofer, A. Pinz and R. P. Wildes, Temporal residual networks for dynamic scene recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • Industrial Collaborations: a) with MDA Corporation/MAXAR (Brampton) and Memorial Sloan Kettering Cancer Center (NY, NY) to develop a machine vision stereo system for deployment during medical surgery; b) with Huawei (Canada) to develop video understanding systems; c) Scientific Advisor to Huawei Noah’s Ark Lab – Canada.


Jianhong Wu

DRP CRC
Affiliation: Professor, Mathematics & Statistics
Role: Executive Committee
Research Areas: Nonlinear dynamical systems, mathematical biology, neural networks, data analytics, emergency rapid response simulation
Research Highlights: High-dimensional data clustering, delay processing for memory storage and retrieval, pattern recognition, disease modeling and health analytics