Site Issues

We are aware of website issues and are working to correct them (June 15, 2022).

An Invitation to Ukrainian Students

The research funding agencies in Canada have established a temporary emergency fund to support Ukrainian students in continuing disrupted graduate studies at a university in Canada. I know there are more pressing issues than continuing studies for most Ukrainians, and that even getting to Canada is a challenge. That said, if you are interested in HCI and/or InfoVis research, and you are affected by this crisis, I would be happy to talk to you about joining my lab under this emergency funding. I can likely supplement funding to provide for degree completion. Email me directly.


Vialab members to have largest ever presence at the IEEE VIS Conference

This year at the IEEE VIS Conference (Oct 19-25), members of the Ontario Tech University visualization research group Vialab will be at the Vancouver Convention Centre to present research results across the entire spectrum of conference events. Starting on Saturday, visiting researcher Tommaso Elli (Politecnico di Milano) will present his digital humanities dissertation plans at the doctoral colloquium. On Sunday, Christopher Collins (conference posters co-chair) will present a paper co-authored with lab members Adam Bradley and Victor Sawal, Approaching Humanities Questions Using Slow Visual Search Interfaces, at the VIS4DH workshop. After lunch, Christopher will give the keynote Mixed-Initiative Visual Analytics: Model-Driven Views and Analytic Guidance at the MLUI workshop.

Christopher will participate on day one in the panel discussion at the EVIVA-ML Workshop. On Tuesday in the opening plenary, Christopher will accept the VAST Test of Time Award with Martin Wattenberg and Fernanda Viégas for their 2009 paper Parallel Tag Clouds to Explore Faceted Text Corpora.

In the first papers session (aptly titled Provocations) on Tuesday (2:50, Ballroom B), Kyle Hall will present a paper co-authored by lab members Adam Bradley and Christopher Collins on Design by Immersion: A Transdisciplinary Approach to Problem-Driven Visualizations.  On Tuesday afternoon in Room 1, Mariana Shimabukuro will present a short paper H-Matrix: Hierarchical Matrix for Visual Analysis of Cross-Linguistic Features in Large Learner Corpora, a language education visualization created with collaborators from the University of Konstanz.

On Wednesday (10:50am, Room 2+3) Brandon Laughlin will present A Visual Analytics Framework for Adversarial Text Generation at the VizSec Symposium on Visualization for Cyber Security. This work, in conjunction with Ontario Tech researchers from the Faculty of Business and IT, proposes a method to leverage machine learning and human linguistic expertise together to create adversarial examples which can convince both human readers and machine learning classifiers. Later that morning (11:50am, Room 8+15), Menna El-Assady will present our CG&A paper Visualization and the Digital Humanities: Moving Toward Stronger Collaborations written in collaboration with Adam Bradley, Christopher Collins, and collaborators from several institutions. This paper presents the experiences of interdisciplinary collaboration from both the humanities and computer science points of view. The poster reception starting at 5:10pm in Ballroom ABC will include Mariana Shimabukuro’s poster Cross-Linguistic Word Frequency Visualization for PT and EN (displayed in poster position 87 starting Monday for the entire week).

On Thursday afternoon (4:40, Ballroom B) Menna El-Assady will present Semantic Concept Spaces: Guided Topic Model Refinement using Word-Embedding Projections. This paper presents a new method for manipulating linguistic models such as topic models through expressing semantic knowledge using a visual interface, allowing for the training and adjustment of complex black-box models without adjusting obscure model parameters.

On Thursday (11:35, Ballroom A), alumnus Rafael Veras will present his work Discriminability Tests for Visualization Effectiveness and Scalability. If changes in the data are not visible to viewers, the visualization is not effective. This project introduces a new low-cost method for modelling human perception to determine the limits of discriminability for visualizations.

At this year’s IEEE VIS, Vialab will present 3 full papers, 1 short paper, 1 symposium paper, 1 workshop paper, 1 poster, 1 CG&A paper, and 1 doctoral colloquium talk! We gratefully acknowledge the funding support of the Canada Research Chairs program and NSERC, which have made these projects possible. Open access preprints of the papers are available on this website as well as on arXiv.

Vialab members presenting award-winning work at the ACM CHI Conference on Human Factors in Computing Systems

From Saturday, May 4th to Thursday, May 9th, 2019, Dr. Christopher Collins and Dr. Rafael Veras from vialab will be attending the ACM CHI Conference on Human Factors in Computing Systems in Glasgow to promote our research activities. CHI is the premier international conference on Human-Computer Interaction, a place where researchers and practitioners gather from across the world to discuss the latest in interactive technology. The conference is highly selective, accepting only about 23% of submitted works in the full papers track. Both papers co-authored by vialab members received honorable mention awards, which are given to the top 4% of submissions in any given year.

Saliency Deficit and Motion Outlier Detection in Animated Scatterplots (Honorable mention)

Rafael Veras and Christopher Collins

Monday May 6th at 14:00 – Room: Lomond Auditorium

When a data visualization is animated, some points naturally pop out and others are less visible among the clutter. Through a large scale perceptual experiment, we determined which factors are most likely to cause important data elements to be seen or missed. The resulting model can be used to guide the design of visualizations to ensure important data points will be visible.

ActiveInk: (Th)Inking with Data (Honorable mention)

Hugo Romat, Nathalie Henry Riche, Ken Hinckley, Bongshin Lee, Caroline Appert, Emmanuel Pietriga, Christopher Collins.

Tuesday May 7th at 14:00 – Room: Hall 1

Interacting with a pen to write thoughts and sketch ideas is a natural way to think through an analysis. In ActiveInk, we merge note-taking with novel ink-driven actions on data visualizations, such as circling a data item to highlight it, or crossing it out to remove it from view.

Other Contributions from Ontario Tech University

Members of Ontario Tech’s Games User Research Group also have a strong showing at CHI 2019!

Full papers:

Let’s Play Together: Adaptation Guidelines of Board Games for Players with Visual Impairment

Frederico da Rocha Tomé Filho, Pejman Mirza-Babaei, Bill Kapralos, Glaudiney Moreira Mendonça Junior.

Wednesday May 8th at 14:00 – Room: Gala

While board games have been rising in popularity in the past decade, they have been largely inaccessible for those with visual impairment. We investigated and evaluated various accessibility strategies to make these games playable to all users, regardless of visual ability, and propose a series of guidelines for the design and evaluation of accessible games.

Aggregated Visualization of Playtesting Data

Günter Wallner, Nour Halabi, Pejman Mirza-Babaei

Wednesday May 8th at 16:00 – Room: Hall 2

Visualization techniques are currently being employed to help integrate quantitative and qualitative data. This paper proposes an aggregated visualization technique to simultaneously display mixed playtesting data. We evaluate the usefulness of the technique through interviews with professional game developers and compare it to a non-aggregated visualization.

Late-breaking works:

Artificial Playfulness: A Tool for Automated Agent-Based Playtesting

Samantha Stahlke, Atiya Nova, Pejman Mirza-Babaei

Tuesday May 7th at 10:20 am – Room: Hall 4

Playtesting is a crucial part of the game production process, but testing with human users can be incredibly expensive and time-consuming. Our research aims to address these challenges with PathOS – a prototype framework for simulating player navigation in games through the use of AI. PathOS gives developers a cost-effective option to coarsely predict player behaviour, allowing them to pursue informed iteration on their work earlier in the design process.

FRVRIT – A Tool for Full Body Virtual Reality Game Evaluation

Daniel MacCormick, Alain Sangalang, Jackson Rushing, Ravnik Singh Jagpal, Pejman Mirza-Babaei, Loutfouz Zaman

Tuesday May 7th at 10:20 am – Room: Hall 4

Testing and evaluating how players interact with VR games often requires watching back hours of footage and manually noting down observations. FRVRIT provides developers a way of recording entire VR sessions and visualizing them at a glance, in their entirety.


User Experience (UX) Research in Games

Instructors: Lennart Nacke, Pejman Mirza-Babaei, Anders Drachen

Thursday May 9th from 9:00 to 16:00 – Room: Castle 1 Crown

Vialab member Menna El-Assady to present 'ThreadReconstructor: Modeling Reply-Chains to Untangle Conversational Text through Visual Analytics' at EuroVis 2018 in Brno.

On Wednesday, June 6th, 2018, a Vialab member will be presenting a new paper.

"ThreadReconstructor: Modeling Reply-Chains to Untangle Conversational Text through Visual Analytics" is led by Ph.D. student Mennatallah El-Assady in collaboration with the University of Konstanz, and presents a visual analytics approach for detecting and analyzing the implicit conversational structure of discussions. Motivated by the need to reveal and understand single threads in online conversations and text transcripts, ThreadReconstructor combines supervised and unsupervised machine learning models to enable the exploration of generated threaded structures and the analysis of the untangled reply-chains, comparing different models and their agreement.

ThreadReconstructor will be published in the Computer Graphics Forum, volume 37, issue 3, and presented at Eurovis 2018 in Brno.

2016 Course Materials

I have made my slides, assignments, and in-class examples available for students and other instructors who may be interested. These slides are inspired by many others, in particular Dr. Mark Green who often teaches this course at UOIT. In addition, the in-class examples are adapted from a set of examples by Daniel Vogel at Waterloo, and I am grateful to him for making these available. The ray tracing assignment is based on skeleton code and an assignment handout by Dr. Tobias Isenberg. Finally, I am grateful to Dr. Aaron Hertzmann who was my graphics instructor at the University of Toronto and who set a high bar for the technical content of a course like this. My course notes were valuable in preparing to teach this topic.

Course topics overview (2016):

  1. Graphics Pipeline
    1. From model to pixels, overview of the basic process
  2. Introduction to Graphics Programming
    1. GLUT, GLEW, and GLM
    2. Vertex and fragment shaders
    3. Transformations, lighting
    4. Geometrical Data
  3. Modeling
    1. Polygons, face and vertex tables, normal vectors
    2. Transformations, matrices, composition of transformations
    3. Homogeneous coordinates
    4. Implicit representations
    5. Parametric representations, piecewise representation, continuity
    6. Cubic curves, canonical form, blending functions
    7. Hermite, natural spline, Cardinal spline, Bezier curve
    8. Hierarchical modeling, OpenGL examples, display lists
    9. Subdivision algorithms
  4. Rendering
    1. Viewing transformations, projections
    2. Hidden surface, z-buffer, BSP trees
    3. Basic lighting, ambient, diffuse and specular reflection
    4. Texture mapping, Mipmaps, texture mapping in OpenGL
  5. Ray Tracing
    1. Local and global illumination
    2. Basic ray tracing technique, reflection, refraction, shadows
    3. Intersection calculations, sphere, plane, polygons
    4. Performance, bounding volumes, grids
    5. Distributed ray tracing, sampling patterns, path tracing
  6. Graphics Hardware
    1. Video, sync, frame buffers, bandwidth issues
    2. 3D acceleration, path to fixed function pipeline
    3. GPU architecture
  7. Introduction to Visualization
    1. Scientific and information visualization
    2. Visual variables and perception
    3. Colour perception & theory
    4. Colour spaces
    5. Scalar and vector visualization techniques
    6. Marching squares and marching cubes algorithms
    7. Volume rendering, transfer functions, volume traversal
  8. Advanced OpenGL programming
    1. Tessellation and geometry shaders
    2. Procedural textures
    3. GPGPU
    4. OpenGL versions
  9. Graphics Application Development
    1. Data file formats
    2. Interaction
    3. Case studies
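As a small illustration of the modeling topics above (matrix composition in homogeneous coordinates), the following sketch composes a rotation about an arbitrary point in plain Python. The helper names are mine for illustration, not code from the course materials:

```python
import math

def mat_mul(A, B):
    # Product of two 3x3 matrices (2D homogeneous transforms).
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(M, x, y):
    # Apply M to the homogeneous point (x, y, 1); drop the w component.
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])

# Rotate 90 degrees about the point (1, 0):
# translate to the origin, rotate, translate back.
M = mat_mul(translate(1, 0), mat_mul(rotate(math.pi / 2), translate(-1, 0)))
print(apply(M, 2, 0))  # approximately (1, 1)
```

Reading the composition right-to-left (the order the point encounters the matrices) is the convention used with column vectors, as in OpenGL's fixed-function matrix stack.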

vialab hosted talk: Dr. Nathalie Henry Riche, Microsoft Research


Title: Data-driven Storytelling: Transforming Data into Visually Shared Stories

Abstract: In this talk, I will present my most recent research efforts in the field of information visualization and data-driven storytelling. While most of the research in information visualization has been focusing on designing and implementing novel interfaces and interactive techniques to enable data exploration, data visualizations also started to appear as a powerful vector for communicating information to a large audience. Stories supported by facts extracted from data analysis (e.g. data-driven storytelling) proliferate in many different forms from static infographics to dynamic and interactive applications on news media outlets. Yet, there is little research on what makes compelling visual stories or on how to empower people to build these experiences without programming. I will present insights from projects focusing on two different genres of data-driven stories: animations and comics. I will conclude the talk by reflecting on challenges and opportunities in this new research field.

Bio: Nathalie has been a researcher at Microsoft Research since 2008. She is a member of the EPIC (Extended Perception, Interaction, & Cognition) research group led by Ken Hinckley. Nathalie holds a Ph.D. in computer science from Université Paris-Sud, France, and the University of Sydney, Australia. She has published her research in leading venues in Human-Computer Interaction and Information Visualization, has received several best paper nominations and awards, and is involved in the organizing and program committees of major visualization and human-computer interaction conferences.

Vialab contributions to IEEE VIS 2017

Vialab members had several contributions to the IEEE VIS conference in Phoenix this month. Our contributions also reflected the breadth of the lab’s collaborations, spanning France, Scotland, Germany, Canada, and the USA.

Menna El-Assady (also affiliated with University of Konstanz) presented our paper on progressive learning of topic model parameters, for which we received an honourable mention for best paper! Her framework allows people who do not know about the inner workings of topic models to guide the settings of the parameters by examining the outputs of competing models and “voting” on their preference. Through an evolutionary approach, the topic models are refined without ever having to play with complex settings.

Hrim Mehta presented her work on Data Tours in collaboration with Dr. Fanny Chevalier and colleagues at Inria, in France. Hrim’s poster presented our idea of how to author semi-automated tours of large datasets, which can be used as a narrative overview of datasets for which a static overview would be too cluttered or overwhelming.

Mariana Shimabukuro presented her poster on automatically abbreviating text labels for visualizations. She used a crowd-sourcing platform to gather abbreviation strategies from many participants and simultaneously measured the success of these abbreviations by asking other participants to decode them. The resulting abbreviation algorithm is available as an API to abbreviate your own labels on visualizations made with d3 or other web-based platforms.
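Vowel dropping is one typical manual abbreviation strategy for shortening labels. The toy sketch below shows a naive version of that idea; it is my illustration only, not the lab's published algorithm or its API:

```python
def abbreviate(label, max_len=6):
    """Toy label abbreviation: keep the first letter, drop later
    vowels, then truncate. Illustrative only."""
    if len(label) <= max_len:
        return label
    head, tail = label[0], label[1:]
    squeezed = head + "".join(ch for ch in tail if ch.lower() not in "aeiou")
    return squeezed[:max_len]

print(abbreviate("Temperature"))  # Tmprtr
print(abbreviate("Population"))   # Ppltn
```

A real abbreviation system would also need to check that decoded abbreviations remain unambiguous across a full set of labels, which is what the crowd-sourced decoding step in the study measured.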

Mariana also is a co-author on an IEEE TVCG paper on font size as a data encoding, first-authored by Dr. Eric Alexander of Carleton College and colleagues at the University of Wisconsin. Eric’s talk highlighted the surprising finding that people are much better at judging differences in font size than expected, even when doing so in the presence of biasing factors such as varying length of words. This work lends credibility to the use of font size as a visual encoding, at least for tasks where “which is bigger” is the main question.

Dr. Christopher Collins was a co-organizer of the 2nd workshop on Immersive Analytics, a full-day event at VIS which attracted a number of papers and a whole lot of open research questions.

Dr. Collins, Menna El-Assady, and Dr. Adam Bradley were co-authors on "Risk the Drift! Stretching Disciplinary Boundaries through Critical Collaborations between the Humanities and Visualization", a position paper advocating for flexibility in interdisciplinary research, presented at the 2nd Visualization for Digital Humanities Workshop (VIS4DH), of which Dr. Collins and Menna El-Assady were also co-organizers.

Vialab member Menna El-Assady presented ‘NEREx: Named-Entity Relationship Exploration in Multi-Party Conversations’ at EuroVis 2017 in Barcelona.

We are pleased to announce that this month, a Vialab member has presented a new paper.

“NEREx: Named-Entity Relationship Exploration in Multi-Party Conversations”¬†was lead by PhD student Mennatallah El-Assady, and presents a visualization used to explore political debates and multi-party conversations. By revealing different perspectives on multi-party conversations, NEREx gives an entry point for the analysis through high-level overviews and provides mechanisms to form and verify hypotheses through linked detail-views.

NEREx will be published in the Computer Graphics Forum, volume 36, number 3, and presented at Eurovis 2017 in Barcelona.


Funded PhD Position in Explainable Artificial Intelligence

Funded PhD Position in Interfaces for Explainable Artificial Intelligence

NOTE: This position is not currently available.

When an artificial intelligence system makes a decision or draws a conclusion, the reasons behind that decision are often obscure and difficult to interpret. In order to trust the outcomes of AI systems, they need to be able to present the rationale behind decisions in understandable, transparent ways. We are seeking a highly-motivated PhD candidate for an interdisciplinary research project across the fields of deep learning, visual analytics, and human-computer interaction. Specifically, the research program will be in Explainable Artificial Intelligence with a focus on creating systems to help people interpret the reasoning behind decisions made by deep learning systems. The selected candidate will join an international collaborative project and will be responsible for the design, implementation, and testing of visualization interfaces connecting to explainable machine learning systems designed by partners on the larger project.

This funded position will be established in the Visualization for Information Analysis Lab (vialab) in the Computer Science PhD program at the University of Ontario Institute of Technology in Oshawa, Ontario, Canada under the supervision of Dr. Christopher Collins. The candidate will collaborate with other team members Dr. Graham Taylor at the University of Guelph and Dr. Mohamed Amer at SRI International.

Depending on performance there is a strong likelihood of one or more paid internships at SRI International during the period of study.


  • Master's degree in Computer Science, Software Engineering, Informatics/Data Science or an equivalent university-level degree and relevant experience
  • Strong and demonstrated programming skills
  • Research, work, or significant course experience in human-computer interaction, visual analytics, or interface design
  • Preference for candidates who also have experience/interest in artificial intelligence or machine learning
  • Able to work as an independent and flexible researcher in interdisciplinary teams
  • Strong English writing and speaking skills


Send the following to

  • Detailed CV
  • Motivation letter explaining your interest in and relevant experience for this project
  • Summary of your Master's thesis
  • Transcripts (unofficial are acceptable at this stage; translations are not required at this stage)

Note: The selected candidate will be invited to apply through the official university application process and offers will be conditional on meeting application criteria for the UOIT CS program.


  • Expressions of interest as soon as possible; formal application process to follow for the invited candidate
  • Start date: September 2017 or negotiable
  • Duration: 4 years


The vialab at UOIT, led by Dr. Christopher Collins, Canada Research Chair in Linguistic Information Visualization, conducts research in information analysis, visual analytics, text and document analytics, and human-computer interaction. The University of Ontario Institute of Technology (UOIT), located in Oshawa, Ontario, advances the discovery and application of knowledge through a technology-enriched learning environment and innovative programs responsive to the needs of students and the evolving 21st-century workplace. UOIT promotes social engagement, fosters critical thinking, and integrates outcomes-based learning experiences inside and outside the classroom. Oshawa, Ontario is located near the city of Toronto, Canada, where many lab members live.

| © Copyright vialab | Dr. Christopher Collins, Canada Research Chair in Linguistic Information Visualization |