CISC 498
Information Technology Project

School of Computing

Proposed Projects 2019-2020



This page lists potential projects proposed by customers from across the university and Kingston community. This year we have many anxious customers who can use your help! You may choose to pursue one of these projects, or find a customer and project of your own, possibly related to clubs or organizations you are involved with.

Projects from past years developed the Queen's Community Service Learning web portal, project management and secure reporting system, the Queen's squash court booking system, a particle size analysis system for Geology, an artifact archival and secure access system for Classics, and many other systems. Ideally your project should create a software system or product that can serve the customer for many years to come.

A good project will normally involve a human interface (such as a web portal), a persistent database, user roles, secure access issues, and multiple technologies for you to learn about. But it can also be a challenging computational system or data management problem - it's up to you.

Projects
Each group must claim a different project and inform the course coordinator as soon as possible. Before claiming a project, you can contact the project's customer to better understand the software system needs.

1. Pain and Locomotor Recovery Analysis—DeepLabCut Interface Development

Customer: Nader Ghasemlou and Ms. Olivia Smith, Pain Chronobiology and Neuroimmunology Laboratory, Department of Biomedical and Molecular Sciences, and Centre of Neuroscience Studies, Queen’s University

The purpose of this project is to develop a user interface for DeepLabCut (see: http://www.mousemotorlab.org/deeplabcut and https://github.com/AlexEMG/DeepLabCut), a newly-developed program allowing researchers to seamlessly analyse pre-recorded videos for movement/motion. DeepLabCut uses pose estimation to track the movement of animals, allowing a researcher to document the recovery process after injury and apply the generated dataset to additional subjects. Acute and chronic pain is a common burden faced by many individuals, particularly after injury/disease. The Pain Chronobiology and Neuroimmunology lab at Queen’s seeks to understand the interactions between immune cells and the nervous system in the pathogenesis of pain and loss of locomotion (for more information, see: www.ghasemloulab.ca). Currently, the recovery process and behaviour of animals are manually documented, leading to observer bias and variability. We now wish to use DeepLabCut to provide a new method of recovery analysis in our animal pain models, helping to further our research in the field of acute and chronic pain. However, this program requires a high level of programming knowledge and access to specialized computer hardware.

Our goal is to use DeepLabCut to process videos of recovery from pain and locomotor dysfunction in animals and humans. This will require integration with the platform at the Centre for Advanced Computing and a user-friendly interface to allow for data processing using DeepLabCut.
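
The sketch below shows, assuming the team drives DeepLabCut through its published Python API (analyze_videos / create_labeled_video), how a thin batch-processing layer behind the user interface might look; all paths are placeholders, and the actual job submission on the Centre for Advanced Computing cluster is a separate integration task.

    # Hypothetical batch step behind the user interface: analyse every new video
    # with an already-trained DeepLabCut model. Paths are placeholders.
    import glob
    import deeplabcut

    CONFIG = "/project/dlc-model/config.yaml"    # hypothetical trained-model config
    VIDEO_DIR = "/project/incoming-videos"       # hypothetical upload folder

    def analyse_new_videos():
        videos = glob.glob(f"{VIDEO_DIR}/*.mp4")
        # Runs pose estimation and writes per-video tracking results (.h5/.csv).
        deeplabcut.analyze_videos(CONFIG, videos, save_as_csv=True)
        # Optionally overlay the tracked body parts on the videos for visual review.
        deeplabcut.create_labeled_video(CONFIG, videos)

    if __name__ == "__main__":
        analyse_new_videos()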

2. System for Social Impact Accounting

Customer: Bahman Kashi, President, Limestone Analytics, Director (CPIA), Dept of Economics, Queen’s University

Governments, international development agencies, NGOs, municipalities, and philanthropists want to maximize the social impact of every dollar they spend. To do so, they need to quantify and measure their impact, processes that are challenging and require significant resources. Many smaller NGOs are unable to afford the analysis it takes to quantify their impact. Therefore, they fail to take their work to scale.

Faculty members in the Economics Department, in partnership with a local company (Limestone Analytics), have developed a new paradigm for quantifying social impact. The application of this paradigm has enabled governments and large NGOs to conduct rigorous analysis on a larger number of projects and direct their resources towards their best use. Similar to the Unified Modelling Language, this paradigm relies on a set of concepts and visual elements with logical inter-relations.

The paradigm is currently a set of guidelines. Following these guidelines, an analyst can specify the methods in documents. There is no software platform to host the process. However, given the consistent relationship among the elements of this paradigm and its quantitative nature, a software platform can facilitate a much more efficient application of this tool and make the analysis accessible to a broader range of NGOs.

3. “Table 1” - A Data Analysis and Visualization Tool for Biomedical Publishing

Customer: David Maslove, Dept of Critical Care Medicine

Medical research often involves recruiting a group of patients into a clinical trial, and studying their health outcomes. When a medical research project is completed, the researchers submit their findings for publication in the form of a journal article. In these articles, it is essential to provide the reader with an overall picture of what this group of patients is like, so that readers can know whether the results apply to their practice. As such, one of the mainstays of medical research articles is a table — often the very first table, or “Table 1” — that provides these details.

Traditionally, Table 1 has been a text-heavy representation of the data that can be difficult to evaluate. But modern statistical computing software offers us the potential to greatly improve upon this traditional paradigm. Features could be represented graphically, and the software could automate some of the comparisons and statistical summary functions that are required to round out the presentation.

Requirements

  1. The package should accept a basic Excel spreadsheet (or CSV file) as input. The team will develop standards for formatting the input file, which will appear in the documentation.
  2. The package should include documentation to explain each of the functions, as well as a vignette to illustrate its use for new users.
  3. The package should conform to requirements such that it can be uploaded to CRAN, the largest public repository of R packages.
  4. The package should be developed to work in concert with tools from the R tidyverse, a comprehensive family of libraries that employ common syntax and structure. Visualizations should use the ggplot2 package.
  5. The tool should handle various data types, including continuous, categorical, and binary.
  6. The tool should implement summary statistics methods (mean, median, standard deviation, interquartile range, etc.), as well as basic statistical tests to compare populations (t-test, chi-squared test, etc.).
  7. The tool should generate output as both a text table and a visual “table”.

The goal of this project is to develop a software package written for the R statistical computing language that will allow researchers to easily generate a graphical Table 1 from a spreadsheet of data. Development would preferably be in R in order to achieve maximal uptake by the broader medical research community. A Python version using Bokeh could also be considered as an additional offering.
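
As a rough illustration of the Python alternative mentioned above, the sketch below reads a CSV, splits it on a hypothetical "group" column, and assembles a plain-text Table 1 with summary statistics and simple two-group tests using pandas and SciPy; the column names, the two-group assumption, and the choice of tests are placeholders for the team and customer to decide.

    import pandas as pd
    from scipy import stats

    def table_one(csv_path, group_col="group"):
        df = pd.read_csv(csv_path)
        groups = [g for _, g in df.groupby(group_col)]     # assumes exactly two groups
        rows = []
        for col in df.columns:
            if col == group_col:
                continue
            if pd.api.types.is_numeric_dtype(df[col]):
                # Continuous variable: mean (SD) per group, Welch t-test between groups.
                cells = [f"{g[col].mean():.2f} ({g[col].std():.2f})" for g in groups]
                p = stats.ttest_ind(groups[0][col].dropna(),
                                    groups[1][col].dropna(), equal_var=False).pvalue
            else:
                # Categorical/binary variable: counts per group, chi-squared test.
                cells = [", ".join(f"{k}: {v}" for k, v in g[col].value_counts().items())
                         for g in groups]
                p = stats.chi2_contingency(pd.crosstab(df[col], df[group_col]))[1]
            rows.append([col, *cells, f"p = {p:.3f}"])
        return pd.DataFrame(rows, columns=["variable", "group A", "group B", "test"])

A graphical version would build on this table, for example drawing per-group distributions instead of printing "mean (SD)" text.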

4. Revved Up Volunteer and Participant Database

Customer: Schuyler Earl and Amy Latimer, Revved UP and DIPA Certificate Coordinator, School of Kinesiology and Health Studies, Queen’s University

Revved Up is a community-based, adaptive exercise program that promotes physical activity for those with mobility impairments and developmental disabilities. Revved Up runs out of the School of Kinesiology and Health Studies, Providence Care Hospital and the Kingston YMCA. Our program has over 100 participants and 100 students. Because of this, we have a very large amount of paperwork and tracking that is currently all done on paper or in various spreadsheets. We need a database or similar software program that can be used to organize our participants and volunteers, store confidential information, book intake assessments and track attendance. The database also needs to:

5. Navigating SMUDGes: An Interface for Exploring the Faintest Galaxies in the Universe

Customer: Ananthan Karunakaran and Dr. Kristine Spekkens, Department of Physics, Engineering Physics, and Astronomy, Queen's University

Some of the most peculiar galaxies have been discovered in large numbers in the last few years, posing challenges to our theories of galaxy formation and evolution. These objects have physical sizes that are typical of massive galaxies, like the Milky Way, but a tiny fraction of the number of stars. Due to their low density, they have been dubbed "Ultra-Diffuse Galaxies" or UDGs. Our collaboration, SMUDGES (Systematically Measuring Ultra-Diffuse GalaxiES), has set out to study UDGs in detail to determine how they formed and evolved. Using publicly available imaging data, we are searching for UDGs in large regions of the sky and conducting further observations with some of the world's most advanced telescopes. The current size of our sample is approximately 600 UDGs. However, we expect this number to increase into the 1000s as our search expands. As this is an international collaboration, one of the easiest ways to share our results is online.

Our project idea involves creating a web-based applet to present our data in an organized format. The primary presentation of this data would show the positions of our sample of UDGs on the sky. The user should be able to select and display UDGs with certain characteristics (e.g., position on the sky, colour, size, etc.). Additional features would include displaying images of the UDGs and providing links to download raw data. This project idea has two main components: a front-end web interface and a back-end which will store the data and images for our sample. Our current estimates suggest that both components should be capable of scaling by at least a factor of 10-30 and still functioning without any issues.
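
As one possible shape for the back-end, the sketch below exposes a single query endpoint over a catalogue held in a pandas DataFrame; the column names (ra, dec, size), the CSV storage, and the Flask framework are illustrative assumptions, not requirements.

    import pandas as pd
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    catalogue = pd.read_csv("smudges_catalogue.csv")   # hypothetical export of the sample

    @app.route("/udgs")
    def query_udgs():
        # Return UDGs inside a rectangular sky region, optionally filtered by size.
        ra_min = request.args.get("ra_min", default=0.0, type=float)
        ra_max = request.args.get("ra_max", default=360.0, type=float)
        dec_min = request.args.get("dec_min", default=-90.0, type=float)
        dec_max = request.args.get("dec_max", default=90.0, type=float)
        min_size = request.args.get("min_size", default=0.0, type=float)
        selected = catalogue[
            catalogue["ra"].between(ra_min, ra_max)
            & catalogue["dec"].between(dec_min, dec_max)
            & (catalogue["size"] >= min_size)
        ]
        return jsonify(selected.to_dict(orient="records"))

A front end (for example, a sky map with filter controls) would call this endpoint and plot the returned positions and images.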

6. Experimental Game Design, Analytics and Support

Customer: Anya Hageman, Department of Economics, Queen’s University

Do impatient people take more risks? Are they less likely to buy insurance? And how does behaviour change once insurance is purchased? The purpose of this project is to design software which leads users through a survey and a series of specialized Tetris games while recording game features and users’ inputs. These data will be stored securely in a simple database and enable researchers to relate users’ game-playing skills, impatience, risk preferences, and responses to insurance. The software will keep track of a user’s completion rate, completion times, style of game play, and current and cumulative game scores; it will also design an insurance offer based on previous scores, record whether the insurance offer was taken, and if so, adjust the game scoring accordingly. The software must determine the payment awarded to each user on the basis of time spent, task completion, and game scores. If time permits, developers may analyze the data collected in the pilot study with machine learning.
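
Purely as an illustration of the kind of per-user state the software must keep, the sketch below tracks scores, prices a hypothetical insurance offer from past performance, and applies a score floor once insurance has been purchased; the actual pricing, payout, and payment rules are the researchers' to specify.

    from dataclasses import dataclass, field

    @dataclass
    class PlayerSession:
        scores: list = field(default_factory=list)   # cumulative per-game scores
        insured: bool = False

        def offer_premium(self):
            # Hypothetical rule: the premium scales with the player's average past score.
            if not self.scores:
                return 10
            return max(5, int(0.1 * sum(self.scores) / len(self.scores)))

        def record_game(self, raw_score, insured_floor=50):
            # Hypothetical payout rule: insurance protects the player from very low scores.
            score = max(raw_score, insured_floor) if self.insured else raw_score
            self.scores.append(score)
            return score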

7. Hearing Aid Whistling Detector App

Customer: Philip Jessop, Department of Chemistry, Queen's University.

People who have partial hearing loss wear electronic hearing aids, which sometimes emit an unpleasant high-pitched whistling noise that annoys everyone nearby. The whistling is caused by feedback triggered by earwax buildup, incorrect placement of the hearing aids, or proximity to a sound-reflecting surface like hair, a hat, or a wall. Unfortunately, the wearer of the aids typically cannot hear the sound, and is therefore unaware of the discomfort caused to others. A phone app (iPhone preferably, perhaps Android as well) that can detect the whistling and issue a vibration alert and a visible on-screen alert would let the user know that an adjustment to the hearing aid is needed. Fortunately, modern cell phones and tablets have the capability of detecting such a sound. The app would preferably run continuously in the background so that the user does not have to activate the app every few minutes in order to check if whistling is occurring.
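
The detection step itself is straightforward signal processing. The sketch below, which assumes a 2-8 kHz feedback band and a simple peak-to-background threshold (both would need tuning against real hearing-aid recordings), flags a block of microphone samples containing a strong narrow tone; microphone capture, background operation, and the vibration/on-screen alerts are platform-specific work on top of this.

    import numpy as np

    def whistle_detected(samples, sample_rate, band=(2000.0, 8000.0), ratio=10.0):
        # Windowed magnitude spectrum of one short block of audio samples.
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        peak = spectrum[in_band].max()
        # Flag a whistle when the in-band peak towers over the average level elsewhere.
        return peak > ratio * (spectrum[~in_band].mean() + 1e-9)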

8. Interactive App Exploring David Tudor’s Sound Sculpture Exhibition Rainforest

Customer: Matt Rogalsky, Dan School of Drama and Music, Queen’s University

David Tudor (1924-1996) was an influential musician and composer who began his career as a concert pianist and ended as a creator of unique electronic music. His best-known work Rainforest has as its core concept the idea of using found objects as unconventional loudspeakers: sounds played through the objects are transformed by their acoustic qualities. In the version of the piece known as Rainforest IV, sounds are transduced through many objects suspended throughout a space, so the listener hears them all around them. The sounds transduced through the objects can be from any sources, acoustic or electronic, but should be well-matched with the sonic properties of the objects, and should coexist in the space forming an “electroacoustic ecology”, in Tudor’s words.

https://www.youtube.com/watch?v=1RwQ9nbSFCs

With the proposed app (intended for best use on a tablet device with a larger screen), users will be able to explore a history of Tudor’s Rainforest series of works, through a number of pages with sound and images describing the evolution of the series between 1968 and 1973. More importantly, the app will allow users to have a virtual binaural experience of the piece, learning about its principles and experimenting with it interactively. The app will include a library of sculptural objects represented in photographs and impulse responses (short audio files which capture the objects’ resonances). The impulse responses may be convolved with other audio sources to make those sources sound as if heard through the objects: the sonic qualities of the object are imparted to the source sound. A library of source material (including some from Tudor’s own collection) will be provided to be freely combined with the impulse responses in order to hear how each object changes the character of the sound, and users will have the facility to load their own sounds into the app to be heard through the virtual objects.
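
The core audio operation described above is a convolution. The sketch below shows an offline version with assumed file names, using SciPy's fftconvolve on mono recordings; the app itself would do the equivalent in real time and add binaural placement.

    import numpy as np
    import soundfile as sf                       # assumes the soundfile package for WAV I/O
    from scipy.signal import fftconvolve

    source, rate = sf.read("source_sound.wav")            # mono source material (placeholder name)
    impulse, _ = sf.read("object_impulse_response.wav")   # resonance of one sculpture (placeholder name)

    wet = fftconvolve(source, impulse)           # the source as heard "through" the object
    wet /= np.max(np.abs(wet)) + 1e-9            # normalise to avoid clipping
    sf.write("source_through_object.wav", wet, rate)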

In addition to being able to freely combine impulse responses (representing physical objects) and source sounds, users can distribute these virtual sounding objects in a 2D space around a listener position, to experience hearing the ‘sculptures’ all around them binaurally, much as they would be heard in a gallery exhibition of Rainforest. A further enhancement would be to allow users to create their own ’walking path’ through the field of virtual objects, so that the listener’s relationship to all of the sculptures is constantly changing as virtual objects come closer or recede into the distance.

The customer, Dr. Matt Rogalsky, is a ‘younger generation’ member of David Tudor’s performance group Composers Inside Electronics which was formed circa 1973. He has been teaching and performing Rainforest since 1998. Together with original CIE members John Driscoll and Phil Edelstein, he created a new version of Rainforest which was purchased in 2017 by the Museum of Modern Art in New York City for its permanent collection. The work will be on exhibit from October 2019 to January 2020.

9. Quantifying Cyanobacterial Blooms from Photogrammetric RGB Aerial Imaging

Customer: Daniel Lefebvre, Yuxiang Wang, and Allen Tian, Department of Biology, Queen’s University

Unmanned aerial vehicles are an emerging technology in the field of environmental monitoring, particularly of cyanobacterial algal blooms, which severely threaten aquatic ecosystems. We have an extensive collection of large RGB, photogrammetrically-stitched, UAV-collected aerial images of aquatic ecosystems and cyanobacterial algal blooms. To streamline and normalize detection and quantification, this project intends to build a WYSIWYG program that can isolate and classify cyanobacterial blooms in .tiff images up to 2 GB. The program must be robust enough to isolate blooms in images with large amounts of noise (from other aquatic plants and variations in water colouration) and potentially uneven radiometric conditions (i.e. areas of the image may vary in brightness) using both their RGB colour and spatial characteristics (such as shape and distribution). The program should be able to utilize input from expert users to correct classification over time. Additional desirable but optional features of the program include the ability to correct/normalize the colour balance of images with set values for reference pixels and the ability to identify other aquatic plants.
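
To give a sense of the classification step, the sketch below applies a simple colour-ratio threshold to one tile of a large image, so that multi-gigabyte TIFFs can be processed tile by tile; the thresholds are placeholders, and the real tool would add spatial (shape and distribution) criteria and refine its decisions from expert corrections.

    import numpy as np

    def bloom_mask(tile_rgb, green_ratio=1.15, min_brightness=40):
        # tile_rgb: one (height, width, 3) tile read from the stitched TIFF.
        tile = tile_rgb.astype(np.float32)
        r, g, b = tile[..., 0], tile[..., 1], tile[..., 2]
        brightness = tile.mean(axis=-1)
        # Hypothetical first pass: cyanobacterial scum tends to be strongly green-dominant.
        return (g > green_ratio * r) & (g > green_ratio * b) & (brightness > min_brightness)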

10. Accelerator Laboratory Booking System

Customer: Prof. Mark Daymond and Dr. Fei Long, Dept. of Mechanical and Materials Engineering, Queen’s University

The RMTL facility (www.rmtl.ca) is an accelerator-based irradiation facility here at Queen’s, which includes a range of material processing and characterization equipment (e.g., electron microscopes). Users include academics from Queen’s and elsewhere, as well as industry and national labs, with a range of samples including some that are hazardous. We need to develop a database which can be accessed from the web, with the capability to manage booking and record subsequent use of multiple instruments in the laboratory. Users receive training for each given item of equipment, and approval to book should be based on completed and up-to-date training records. We also need to generate regular reports for each user / piece of equipment for billing purposes. The system should be flexible, allowing different classes of users that will have different access and booking privileges and different billing rates; we need the ability to add new equipment as well as users.
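
A rough sketch of the underlying data model (users, equipment, training records with expiry, bookings, and per-class billing rates) is given below, using SQLite purely for illustration; the real system would sit behind a web application, enforce the training check before accepting a booking, and generate billing reports by joining bookings with rates.

    import sqlite3

    conn = sqlite3.connect("rmtl_booking.db")    # illustrative database name
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY, name TEXT, affiliation TEXT, user_class TEXT);
    CREATE TABLE IF NOT EXISTS equipment (
        id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE IF NOT EXISTS rates (
        equipment_id INTEGER REFERENCES equipment(id),
        user_class TEXT, hourly_rate REAL);
    CREATE TABLE IF NOT EXISTS training (
        user_id INTEGER REFERENCES users(id),
        equipment_id INTEGER REFERENCES equipment(id),
        completed_on DATE, expires_on DATE);
    CREATE TABLE IF NOT EXISTS bookings (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id),
        equipment_id INTEGER REFERENCES equipment(id),
        start_time TIMESTAMP, end_time TIMESTAMP, actual_hours REAL);
    """)
    conn.commit()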

11. Equipment Access Control and Usage Tracking with Badge Readers

Customer: Graham Gibson, NanoFabrication Kingston, Queen’s University

NanoFabrication Kingston (NFK) is Queen’s University’s nanofabrication and cleanroom facility (www.nanofabkingston.ca). NFK is an open-access lab where users from Queen’s, other universities, industry, and government are trained to use state-of-the-art equipment to fabricate devices for microelectronics, photonics, microfluidics, and other technologies. We are looking for a software solution to control access to equipment within the facility according to a user’s credentials and to track that user’s use of the equipment for invoicing purposes. Currently, users are trusted to use only equipment they are trained on and to write on a manual log sheet when they start and finish, which is then manually transferred to a spreadsheet by lab staff. All users already have RFID badges for building security, so the goal is to use these same badges to “badge in” at readers located at each piece of equipment (7 total) to activate that tool and register the usage information (name, date, start time). “Badging out” will then deactivate the tool and register the finish time. The software, therefore, must keep a secure database of user names, user qualifications, and usage tracking information all connected with user badge information, and be able to export this data in a reasonable format for reporting and invoicing.
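
A minimal sketch of the badge-in/badge-out toggle, with the qualification check in front of tool activation, is shown below; the in-memory dictionaries stand in for the secure database, and the badge IDs, tool names, and the call that actually switches a tool on or off are placeholders.

    from datetime import datetime

    qualifications = {"badge-1234": {"sputter-coater", "mask-aligner"}}   # illustrative records
    open_sessions = {}    # (badge_id, tool) -> start time of the active session

    def badge_swipe(badge_id, tool):
        key = (badge_id, tool)
        if key in open_sessions:
            start = open_sessions.pop(key)
            # ...deactivate the tool and log (badge_id, tool, start, finish) for invoicing...
            return ("badge out", datetime.now() - start)
        if tool not in qualifications.get(badge_id, set()):
            return ("denied: not trained on this tool", None)
        open_sessions[key] = datetime.now()
        # ...activate the tool here...
        return ("badge in", None)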

12. School of Nursing Custom Inventory System

Customer: Laura Stephens, Simulation Lab Manager, School of Nursing, Queen's University

The Patient Simulation Lab at Queen’s University SON is a medical simulation lab located in the Cataraqui Building. The proposed project would help us develop custom inventory software to track and maintain our consumable supplies. We currently have over 400 individual items in our space and are lacking a permanent solution for managing this inventory.

Our idea is that if an item is searched by keyword or scanned (label/barcode/QR code), the program (graphical user interface) would show:

13. Urban Birds of Ontario

Customer: Fran Bonier, Associate Professor, Biology Department, Queen's University

We study how birds across Ontario respond to urban habitat. One of the goals of our research is to compile a list of all the birds that breed in Ontario’s cities, with an assessment of how widespread each species is in those urban areas. We want to achieve this using a citizen science approach, by polling people who are familiar with the birds of each city (e.g., bird watchers, ornithologists, naturalists). To do this, we need web-based data collection software that would allow respondents to:

  1. read and acknowledge their understanding of the survey instructions
  2. select the city they are providing data for (from a drop-down list or a map)
  3. select the season they are providing data for (breeding or wintering)
  4. provide data for a list of bird species that we provide based on their responses to 2 and 3 – for each species, they will have 5 options: 0, absent from the city; 1, local occurrence; 2, in between local and widespread occurrence; 3, widespread occurrence in the city; or no data, unsure of this species’ occurrence in the city.
  5. provide a self-assessment of their knowledge of the birds of the city they have provided data for (scale from 1 to 5, from knowing the city’s birds only a little, to knowing the city’s birds very well)
  6. optional: provide any comments
  7. optional: sign up for our email list, which will send out notes on progress of the research and other updates

We would need the data provided by respondents to be compiled in a database in a way that is searchable by species and by city, and ideally easily exported into a stats-friendly format (e.g., a .csv file) for our use in analyses, and also, eventually, so that we can provide the public with lists (e.g., .pdf) of species for each city (after we have completed data collection and quality control of the data).
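
One way to keep the responses searchable by species and by city, and trivially exportable, is to store them in a "long" format with one row per species per response, as in the sketch below; all field names are illustrative.

    import csv
    from dataclasses import dataclass

    @dataclass
    class SpeciesRecord:
        respondent_id: str
        city: str
        season: str            # "breeding" or "wintering"
        species: str
        occurrence: str        # "0", "1", "2", "3", or "no data"
        self_assessment: int   # 1 (knows the city's birds a little) to 5 (very well)

    def export_csv(records, path):
        # Writes the stats-friendly .csv file mentioned above.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["respondent_id", "city", "season", "species",
                             "occurrence", "self_assessment"])
            for r in records:
                writer.writerow([r.respondent_id, r.city, r.season, r.species,
                                 r.occurrence, r.self_assessment])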

14. foldA (festival of live, digital art) Box Office and Data Collection System (Taken)

Customer: Michael Wheeler, Department of Film and Media, Queen’s University and SpiderWebShow

foldA (festival of live, digital art) is Canada's festival of live digital performance, based out of the Isabel Bader Centre for the Performing Arts every June. foldA features a performance series (8-15 shows) from cutting-edge Canadian and international artists working with live digital media and The StartUp, a hands-on industry series with invited guests from around the world. Besides attending shows, we offer StartUp participants workshops with digital tools for experimentation and conditions for rigorous conversation about making live performance in the digital moment. Many of our performances are livestreamed to and from the Isabel on our website, folda.ca. In 2019, close to 800 people attended various aspects of foldA and over 600 people tuned in to our livestreams online.

foldA is powered by artists who are looking for innovative ways to increase efficiency and build new systems that will revolutionize making live performance in the digital moment. The purpose of this project is to develop an aesthetically appealing, creative and easy-to-use system that will house:

15. Customized Optical Character Recognition (OCR) System to Input the Data Automatically into the Database

Customer: Sarosh Khalid-Khan, Division of Child and Youth Mental Health, Queen's University

In order to better organize patient data and conduct retrospective chart review studies, we are developing an Excel-based database to capture patient variables from the medical records. Patients’ medical records are stored in the electronic medical record system. Currently, we input the data by browsing every record. Since we capture more than 40 variables from the records, it takes approximately 30 minutes to complete one case. We have about 2000 patients to input into the database each year, and more new patients keep arriving. In order to expedite this process, we would like to develop a customized OCR system to extract the keywords for each variable and enter them into the database based on the set requirements. For example:

Patient’s family history (written in the medical record): mother has anxiety, father has ADHD. “Family history” is the name of the variable, so “anxiety” and “ADHD” are the values. The OCR system recognizes “anxiety” and “ADHD” and copies them to the corresponding column in the database:

Family History | Development History | Medication | Diagnosis
Anxiety, ADHD  |                     |            |

What does it look like?

  1. The research assistant logs in to the PCS (patient care system) and opens the Excel database and the OCR system at the same time. The patient’s full name and CR number (unique to each patient; a patient’s record can be searched in PCS by full name or CR number) are in the database.
  2. The OCR system retrieves the patient’s information (full name or CR number) from the database and puts it in the PCS search box; the patient’s file is then opened.
  3. Select the visit, then select “view Demographics” and “patient demographics”, extract the demographics information, and enter it into the corresponding column of the database.
  4. Select “Chart Review”, search for the relevant information, and input it into the database.

All of this searching and data entry will be done by the OCR system.
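
The keyword-to-column step from the family-history example might, as a first approximation, be a dictionary lookup like the sketch below; the keyword lists are illustrative, the real vocabulary would come from the clinicians' variable definitions, and handling negations, misspellings, and OCR errors is the harder part.

    # Illustrative per-variable keyword lists; the real ones come from the variable definitions.
    KEYWORDS = {
        "Family History": ["anxiety", "ADHD", "depression"],
        "Medication": ["methylphenidate", "fluoxetine"],
    }

    def extract_variables(ocr_text):
        text = ocr_text.lower()
        return {variable: ", ".join(k for k in keywords if k.lower() in text)
                for variable, keywords in KEYWORDS.items()}

    # extract_variables("mother has anxiety, father has ADHD")
    # -> {"Family History": "anxiety, ADHD", "Medication": ""}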

16. Webpage/Application for Online Psychoeducation/CBT Module

Customer: Sarosh Khalid-Khan, Division of Child and Youth Mental Health, Queen's University

Due to the long waiting list, the current estimated waiting time for a patient to access mental health services is 4-6 months. Psychosocial therapy is one of the effective treatments for mental illness. In order to give patients faster access to the service, we plan to provide an online psychoeducation/CBT module to patients on the waiting list.

What does it look like?

The module has three components: assessment, intervention, and follow-up.

Assessment:

  1. Create an assessment tool inventory; the clinician can choose questionnaires from the inventory based on the patient’s needs.
  2. Scoring of each questionnaire can be done by the system (a small scoring sketch follows this list).
  3. There is a tracking system to record the score of each assessment in the form of a table or graph.
  4. Basic statistical functions for the pre- and post-assessments.
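
A small scoring sketch for the assessment component is given below, with a single made-up inventory entry; the real questionnaires, scoring rules, and statistics are for the clinicians to specify.

    from statistics import mean

    QUESTIONNAIRES = {
        "GAD-7": {"items": 7, "min": 0, "max": 3},   # illustrative inventory entry
    }

    def score(questionnaire, answers):
        spec = QUESTIONNAIRES[questionnaire]
        assert len(answers) == spec["items"]
        assert all(spec["min"] <= a <= spec["max"] for a in answers)
        return sum(answers)

    def pre_post_summary(pre_scores, post_scores):
        # Basic pre/post comparison for the tracking table or graph.
        return {"pre mean": mean(pre_scores), "post mean": mean(post_scores),
                "mean change": mean(post_scores) - mean(pre_scores)}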

Interventions:

  1. We will provide the PowerPoint slides and quiz questions for the psychoeducation sessions and CBT. We would like to break the sessions down into several components, with a quiz between each component. Participants have to complete one component before they can go on to the next. At the end of the session, there is a quiz to review the whole session.
  2. Reminder: we would like participants to finish a session within 2-3 weeks. There should be a reminder, linked to the calendar, to remind the participant to complete the session.

Follow-up:

Messenger: there should be a messaging feature allowing the clinician to communicate with participants. The clinician can send tasks and reminders to participants, and participants can also message the clinician if needed.