Welcome to CSI 4106!

CSI 4106 - Fall 2024

Author

Marcel Turcotte

Published

Version: Dec 19, 2024 12:40

Preamble


The presentation includes three videos, but we will only watch one. These videos were chosen for their science fiction aspect and their use of generative AI.

Figure’s approach is markedly different from earlier methods, which often relied on separate modules to solve different problems. For example, a robot would use a planning algorithm to find an optimal path, and its reasoning was heavily scripted (“on rails”), limiting both the range of tasks and the capacity to learn.

Here, we see an end-to-end approach based on neural networks. Deep learning aims to produce flexible, general, and capable learning systems. Inspired by large language models (LLMs), we are now observing the development of large action models (LAMs).

The videos on the new models from Google DeepMind are equally impressive.

How do you feel about these developments?



Learning objectives

  • Clarify the proposition
  • Discuss the syllabus
  • Articulate the expectations
  • Explore the various definitions of “artificial intelligence”

Our first class is likely to end early, which will leave us some time to discuss after the presentation, if you wish.

I want to clarify my proposal. I have chosen a specific approach to introduce the concepts and I would like to explain the reasons for this choice.

After presenting the course outline and expectations, we will discuss the different definitions of artificial intelligence.

Proposition

Course overview

Calendar description

The roots and scope of Artificial Intelligence. Knowledge and knowledge representation. Search, informed search, adversarial search. Deduction and reasoning. Uncertainty in Artificial Intelligence. Introduction to Natural Language Processing. Elements of planning. Basics of Machine Learning.

Here is the official course description. At the end of this presentation, you will find the Python code I used to produce this audio clip.

This course description dates back several years, placing machine learning at the end of the list.

Aims: Deep learning early

To the larger community of computer science and information technology, AI is usually identified by the techniques grown from it, which at different periods may include theorem proving, heuristic search, game playing, expert systems, neural networks, Bayesian networks, data mining, agents, and recently, deep learning.

  • Deep learning is so dominant that I have chosen to structure everything around it

What does it mean?

Good Old-Fashioned AI (GOFAI) relied on hand-crafted knowledge engineering, but it has been largely displaced by machine learning due to the increased availability of data, computing resources, and new algorithms.

. . .

Deep learning has significantly impacted various domains, including natural language processing, robotics, and computer vision.

. . .

However, deep learning has current limitations, particularly in reasoning, where symbolic AI excels and could potentially offer valuable insights.

But also

  • In A Brief History of Intelligence (Bennett 2023), Max Bennett discusses significant milestones in the evolution of human intelligence and draws parallels to advancements in artificial intelligence (AI).
  • Learning itself represents one of the earliest and most extensively understood milestones in the evolution of intelligence.

Curiously, the development of AI has been largely shaped by logical approaches (symbolic AI), rooted in intellectual currents from philosophy and mathematics rather than from biology and evolution. Reasoning, as practiced in philosophy and mathematics, relies on complex cognitive functions that are perhaps less well understood and evolved more recently.


It would have been logical to approach the study of intelligence by progressing from simpler forms to more complex ones.

Aims: Applied

Many software developers worry that large language models will make human coders obsolete. We doubt that AI will replace coders, but we believe that coders who use AI will replace those who don’t.

  • Whenever possible, concepts will be introduced with code.

Aims: Academic rigour

In pursuing clarity and accessibility, this course aims to strike a balance between informal discourse and the precision required for academic rigour. The objective is for learners to not only grasp but also apply, evaluate, and critically analyze the concepts discussed throughout the course.

Syllabus

Course information

Web sites

Schedule

  • Lectures: Monday 13:00-14:20 and Wednesday 11:30-12:50 FSS 2005
  • Office hours: Wednesday 15:00-16:20 STE 5106
  • Official schedule: www.uottawa.ca/course-timetable

Grading

Category            Percentage
Assignments         40% (4 x 10%)
Quiz                20%
Final examination   40%

Reading material

I will draw upon insights from the two comprehensive textbooks listed below, as well as relevant scientific publications. Additionally, all sources of information will be cited. For most people, I expect that my lecture notes will be sufficient.

The Campus Store has ordered a small number of copies of these books, for those interested.

We do not closely adhere to the framework proposed by Russell and Norvig (2020) and Poole and Mackworth (2023). Specifically, while these textbooks use the concept of an intelligent agent as a central theme, fields such as machine learning (ML), natural language processing (NLP), and vision operate as distinct communities. In these communities, problems are typically not framed in terms of agents.

There are two websites to use. On my personal site, you will find presentations and code examples. On Brightspace, you will submit your assignments and participate in discussion groups.

During class, visit my personal website. There, you can review the complete syllabus, the course schedule, information about the team, and the brief biography of the instructor.

Beta testers

This will be my first iteration of this content. Your help identifying what works and what doesn’t will be most appreciated.

Warnings

CSI 4106 is an introductory course on artificial intelligence, offering a brief overview of various topics within this broad field. Each topic covered could be explored in much greater depth through one or more graduate-level courses. The primary objective of CSI 4106 is to provide students with a foundational understanding of the core areas that constitute artificial intelligence.

. . .

Overlaps with other courses are inevitable, but I will do my best to keep them to a minimum.

. . .

This is not a course on the impact of AI on society, including ethics, fairness, trust and safety.

This warning is actually for myself. These are topics I am passionate about, and I would love to share everything I know with you. However, that is obviously not possible. Generally, I will try to focus on a small number of approaches to thoroughly understand the subjects, rather than adopting an exhaustive approach.

Setting the Stage: AI, Deep Learning, and Diverging Views on Intelligence

AI, ML, DL

Attribution: Avimanyu786; SVG version: Tukijaaliwa, CC BY-SA 4.0, visited 2024-06-18.

Deep learning is so prevalent today that some people might confuse it with artificial intelligence. As the figure shows, deep learning is one of many techniques used in machine learning. Machine learning, in turn, is one of several disciplines within artificial intelligence. Other AI disciplines include knowledge representation, reasoning and planning, natural language processing, computer vision, and robotics.

By the end of this course, this distinction should be very clear.

Schools of thought

  • Symbolic AI (includes approaches based on logic)
  • Connectionists (mostly neural networks)

Long seen as mutually exclusive

At the outset of this course, it is important to recognize that two main schools of thought exist in AI: symbolic AI and connectionism. Initially, the symbolic approach was dominant in the field of AI, but today, the connectionist approach prevails.

Symbolic AI

  • “Their founding tenet held that knowledge can be represented by a set of rules, and computer programs can use logic to manipulate that knowledge.” (Strickland 2021)
  • “Researchers developing symbolic AI set out to explicitly teach computers about the world.” (Strickland 2021)
  • “(\(\ldots\)) a physical symbol system has the necessary and sufficient means for general intelligent action.”
    (Newell and Simon 1976)

Note the importance of the word “explicitly” in this statement. It is not about providing examples to the computer, but rather about describing human knowledge using logic.

The researchers of the time were convinced that the symbolic approach was the key to success.

Symbolic AI

The Towers of Hanoi is a puzzle that consists of three pegs and a number of disks of different sizes. The puzzle starts with all the disks stacked in decreasing size on one peg, and the goal is to move the entire stack to another peg, following these rules:

  1. Only one disk can be moved at a time.
  2. A disk can only be placed on top of a larger disk or on an empty peg.

D1, D2, D3, P1, P2, P3 are symbols, where D1, D2, and D3 are disks, and P1, P2, and P3 are pegs.

Action Move(X,Y,Z):
    Preconditions = {Clear(X), On(X,Y), Clear(Z), Smaller(X,Z)};
    Effects = {-On(X,Y), Clear(Y), On(X,Z), -Clear(Z)};

Initial state:
On(D1, D2), On(D2, D3), On(D3, P1),
Clear(D1), Clear(P2), Clear(P3),
Smaller(D1, D2), Smaller(D1, D3), Smaller(D2, D3),
Smaller(D1, P1), Smaller(D1, P2), Smaller(D1, P3),
Smaller(D2, P1), Smaller(D2, P2), Smaller(D2, P3),
Smaller(D3, P1), Smaller(D3, P2), Smaller(D3, P3).

Goal:
On(D1, D2), On(D2, D3), On(D3, P3).

Plan (Y is the object directly beneath the disk being moved, so that On(X,Y) holds):
Move(D1, D2, P3)
Move(D2, D3, P2)
Move(D1, P3, D2)
Move(D3, P1, P3)
Move(D1, D2, P1)
Move(D2, P2, D3)
Move(D1, P1, D2)

Towers of Hanoi

(for your information only)

See also: Binary, Hanoi and Sierpinski, Part 1 and Part 2, by 3Blue1Brown.
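For reference, the seven-move plan above can also be generated by the classic recursive algorithm: to move \(n\) disks, first move the \(n-1\) smaller disks to the spare peg, move the largest disk directly, then move the \(n-1\) disks back on top of it. A minimal Python sketch (the disk and peg names are illustrative, chosen to match the symbols on the slide):

```python
def hanoi(n, source, target, spare, moves=None):
    """Solve Towers of Hanoi recursively, returning the list of moves.

    Move n disks from `source` to `target`, using `spare` as the
    auxiliary peg.  Each move is a (disk, from_peg, to_peg) triple.
    """
    if moves is None:
        moves = []
    if n > 0:
        # Move the n-1 smaller disks out of the way,
        hanoi(n - 1, source, spare, target, moves)
        # move the largest remaining disk directly to the target,
        moves.append((f"D{n}", source, target))
        # then move the n-1 smaller disks on top of it.
        hanoi(n - 1, spare, target, source, moves)
    return moves

for move in hanoi(3, "P1", "P3", "P2"):
    print(move)
```

For three disks this produces the expected \(2^3 - 1 = 7\) moves, in the same order as the plan above.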

Connectionist

Attribution: LeNail, (2019). NN-SVG: Publication-Ready Neural Network Architecture Schematics. Journal of Open Source Software, 4(33), 747, https://doi.org/10.21105/joss.00747 (GitHub)

Connectionist

Inspired by biology, artificial neural networks (ANNs) are computational models designed to mimic the human brain’s network of neurons. They consist of layers of interconnected nodes (neurons), each connection having an associated weight.

. . .

ANNs process input data through these weighted connections, and learning occurs by adjusting the weights based on errors in the training data.

The term “connectionists” comes from the idea that nodes in these models are interconnected. Instead of being explicitly programmed, these models learn their behavior through training.

Deep learning is a connectionist approach.
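The error-driven weight adjustment described above can be made concrete with the simplest possible connectionist model: a single neuron trained with the perceptron learning rule. This is a minimal sketch in plain Python (the AND task, the learning rate, and the number of epochs are illustrative choices, not prescriptions):

```python
def predict(weights, bias, inputs):
    # Weighted sum of the inputs followed by a step activation.
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= 0 else 0

def train(examples, epochs=20, learning_rate=0.1):
    # Start from zero weights.  Learning is nothing but error-driven
    # adjustment of the connection weights; no rules are programmed.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Toy task: learn the logical AND function from four labelled examples.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)
for inputs, target in examples:
    print(inputs, "->", predict(weights, bias, inputs))
```

Nothing in the code states what AND means; the behaviour emerges entirely from the examples, which is the essence of the connectionist approach.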

Defining AI


Attribution: Melanie Mitchell, Substack post titled: The Turing Test and Our Shifting Conceptions of Intelligence, August 15, 2024.

Context

  • Significant hype.
  • Several leading companies, including DeepMind and OpenAI, are primarily focused on the development of artificial general intelligence (AGI).
  • Should AGI become a reality, its implications could profoundly impact various aspects of our lives, including education, employment, economic structures, and scientific advancements.

AI thus has great potential to influence the course of your lives. It is therefore relevant to take the time to define what AI is and to explore its different approaches.

I should warn you right away that there is no consensus on the definition of AI, nor on the definition of intelligence.

An operational definition

To the larger community of computer science and information technology, AI is usually identified by the techniques grown from it, which at different periods may include theorem proving, heuristic search, game playing, expert systems, neural networks, Bayesian networks, data mining, agents, and recently, deep learning.

Why define AI?

  • Enhancing Clarity and Communication: Establishing a mutual understanding of evaluation criteria, benchmarking tests, and milestones.

  • Addressing Ethical, Social, and Legal Implications: Enabling policymakers to develop regulations that minimize AI-related risks.

The complex nature of intelligence

. . .

“Artificial” pertains to the creation of entities or phenomena that mimic natural processes using technology or synthetic materials, a definition broadly recognized and accepted.

. . .

Therefore, defining “artificial intelligence” fundamentally requires us to first clarify what we mean by “intelligence.” Surprisingly, “[d]espite a long history of research and debate, there is still no standard definition of intelligence.” (Legg and Hutter 2007)

Question 1

  • Does the concept of intelligence exist independently from the entities that exhibit it?
    • Using computation as an analogous concept can help clarify this question. Theoretically, computation is often viewed abstractly, separate from its physical implementation. The Church-Turing thesis, a core principle in computer science, asserts that anything computable at all can be computed by a Turing machine, regardless of the hardware that implements it. This implies that the concept of computation exists independently of any specific physical form.

I have thought a lot about this question over the summer. For me, an interesting definition of intelligence would be similar in nature to that of computation.

Question 2

  • Can a machine exhibit human-level intelligence?
    • Currently, no machine possesses human-level intelligence. However, AI applications, particularly in analyzing large datasets, have already proven to be highly impactful.
    • What would it mean if we were not able to create machines with human-level intelligence?

I must confess something to you: until 2022, I was rather skeptical. I often drew a parallel between artificial intelligence and alchemy. Alchemists never succeeded in transmuting lead into gold because it is a physical process, but their efforts led to the development of chemistry. Similarly, we may never be able to replicate human intelligence, but the offshoots of AI are immense and often unexpected, like the garbage collector introduced by Lisp.

Then it struck me: I am a scientist, and I wondered what it would mean if we were not able to produce intelligence comparable to that of humans.

A distinction is often made between general AI and narrow AI. Although it might seem pejorative, there is nothing wrong with narrow AI.

Thinking, acting, humanly, rationally

Russell and Norvig consider two axes: thinking vs. acting, and human-like vs. rational.

              Thinking                       Acting
Human-like    thinking humanly (simulation)  acting humanly (Turing test)
Rationality   thinking rationally (logic)    acting rationally (agent)

See also the appendix – Section 8 On Defining Artificial Intelligence

The question of intelligence has been the subject of much debate in the literature and the media, particularly when it comes to animals or computers. This abundance of information can bias our thinking.

A simple thought experiment might offer a new perspective: would you be able to recognize intelligence in an extraterrestrial entity?

Rationality

Rationality involves the evaluation of choices to achieve a goal or to find the optimal solution to a problem. Simon (1972, p. 161) defined rationality as “a style of behavior that is appropriate to the achievement of given goals, within the limits imposed by given conditions and constraints.”

Essential abilities

No one knows where the borderline between non-intelligent behavior and intelligent behavior lies, in fact, to suggest that a sharp border exists is probably silly. But essential abilities for intelligence are certainly:

  1. to respond to situations very flexibly;
  2. to take advantage of fortuitous circumstances;
  3. to make sense out of ambiguous or contradictory messages;
  4. to recognise the relative importance of different elements of a situation;
  5. to find similarities between situations despite differences which may separate them;
  6. to draw distinctions between situations despite similarities which may link them;
  7. to synthesize new concepts by taking old concepts and putting them together in new ways;
  8. to come up with ideas which are novel.

For certain complex concepts, drawing a clear boundary can prove challenging. Take the concept of life, for example. Humans, plants, and insects are considered living, as are microorganisms such as bacteria. However, viruses and viroids are not.

François Chollet, Creator of Keras

François Chollet

Real intelligence is not about mastering an individual skill, he argued, but about taking what has been learned and applying it to a new, different situation.

In his view, intelligence is the ability to efficiently acquire new skills that training did not prepare for, with the goal of accomplishing tasks that are sufficiently different from those a system has seen before.

The wider the scope of the new skills, the closer the computer comes to achieving artificial general intelligence.

“If you can make the learning process as information-efficient as a human mind, then you’ve got AGI,” Chollet said.

So far, machines lag far behind, approximately 10,000 times less efficient than human brains. For instance, it took millions of images to teach computers to recognize pictures of cats, whereas humans learn to identify them based on only one or two examples.

Savage (2024)

Artificial General Intelligence (AGI)

Artificial general intelligence (AGI) refers to a form of artificial intelligence (AI) that either equals or exceeds human proficiency across a diverse array of cognitive functions.

Also known as human-level intelligence. It contrasts with narrow AI, the current state of the art, which is designed to perform a specific task or a limited range of tasks, operating under predefined constraints and without general cognitive abilities.

AlphaFold (1, 2, & 3)

I repeat, there is nothing wrong with narrow AI.

  • “Two papers in this week’s issue dramatically expand our structural understanding of proteins. Researchers at DeepMind, Google’s London-based sister company, present the latest version of their AlphaFold neural network.”

    • Jumper et al. (2021)

AI effect/paradox

(\(\ldots\)) as soon as a computer system is built to solve a problem successfully, the problem is no longer “only solvable by the human mind,” so does not need intelligence anymore. Consequently, “AI is whatever hasn’t been done yet” (Hofstadter, 1979; Schank, 1991), which is known as “the AI effect” (McCorduck 2004).

The paradox of AI is quite fascinating. In the early days of AI, researchers focused on problems like differential equations or chess.

Each time a computer solves one of these major problems, we come to think that perhaps that problem didn’t really require intelligence to be solved after all.

The defeat of Garry Kasparov at the hands of IBM’s Deep Blue is a perfect example.

Impact

Economical

McKinsey research estimates that gen AI could add to the economy between $2.6 trillion and $4.4 trillion annually while increasing the impact of all artificial intelligence by 15 to 40 percent.

In fact, it seems possible that within the next three years, anything not connected to AI will be considered obsolete or ineffective.

Subfields of AI

  1. Machine Learning: Credit card fraud detection
  2. Deep Learning: Image and facial recognition
  3. Natural Language Processing: Virtual assistants like Siri or Alexa
  4. Computer Vision: Autonomous vehicles
  5. Robotics: Industrial automation in manufacturing
  6. Expert Systems: Medical diagnosis support
  7. Speech Recognition: Voice-to-text transcription services
  8. Planning and Decision Making: Supply chain optimization
  9. Reinforcement Learning: Game AI in complex strategy games
  10. Knowledge Representation: Semantic web technologies for information retrieval

Our Final Invention

AI expert Kai-Fu Lee predicts that its impact will be “more than anything in the history of mankind.”


What Does the AI Boom Really Mean for Humanity? | The Future With Hannah Fry. Bloomberg Originals, posted on YouTube on 2024-09-12.

Prologue

Summary

  • Discussed the syllabus
  • Explored the various definitions of “artificial intelligence”

Next lecture

  • A brief history of artificial intelligence
  • Jupyter Notebook tutorial

References

Bennett, Max S. 2023. A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. First edition. New York: Mariner Books.
Jumper, John, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, et al. 2021. “Highly accurate protein structure prediction with AlphaFold.” Nature, 1–11. https://doi.org/10.1038/s41586-021-03819-2.
Legg, Shane, and Marcus Hutter. 2007. “A Collection of Definitions of Intelligence.” In Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms, 17–24. NLD: IOS Press. https://doi.org/10.5555/1565455.1565458.
McCorduck, Pamela. 2004. Machines Who Think, A Personal Inquiry into the History and Prospects of Artificial Intelligence. Taylor & Francis Group, LLC. https://doi.org/10.1201/9780429258985.
Mohammed, Anne-Marie, Sandra Sookram, and George Saridakis. 2019. “Rationality.” In Encyclopedia of Law and Economics, edited by Alain Marciano and Giovanni Battista Ramello, 1766–74. New York, NY: Springer New York. https://doi.org/10.1007/978-1-4614-7753-2_404.
Newell, Allen, and Herbert A. Simon. 1976. “Computer Science as Empirical Inquiry: Symbols and Search.” Commun. ACM 19 (3): 113–26. https://doi.org/10.1145/360018.360022.
Nilsson, Nils J. 2005. “Human-Level Artificial Intelligence? Be Serious!” AI Mag. 26 (4): 68–75. https://doi.org/10.1609/AIMAG.V26I4.1850.
Poole, David L., and Alan K. Mackworth. 2023. Artificial Intelligence: Foundations of Computational Agents. 3rd ed. Cambridge University Press.
Russell, Stuart, and Peter Norvig. 2020. Artificial Intelligence: A Modern Approach. 4th ed. Pearson. http://aima.cs.berkeley.edu/.
Savage, Neil. 2024. “Beyond Turing: Testing LLMs for Intelligence.” Commun. ACM, June. https://doi.org/10.1145/3673427.
Strickland, Eliza. 2021. “The Turbulent Past and Uncertain Future of AI: Is There a Way Out of AI’s Boom-and-Bust Cycle?” IEEE Spectrum 58 (10): 26–31. https://doi.org/10.1109/MSPEC.2021.9563956.
Wang, Pei. 2019. “On Defining Artificial Intelligence.” Journal of Artificial General Intelligence 10 (2): 1–37. https://doi.org/10.2478/jagi-2019-0002.

Appendix: On Defining Artificial Intelligence

Wang (2019)

An agent and its interaction with the environment are specified as a tuple: \[ \langle P,S,A \rangle \] where

  • \(P\) represents a sequence of input signals, \(P = \langle p_0,\ldots,p_t \rangle\)
  • \(S\) represents a sequence of internal states, \(S = \langle s_0,\ldots,s_t \rangle\)
  • \(A\) represents a sequence of actions, \(A = \langle a_0,\ldots,a_t \rangle\)

For a sequence of moments, \(0,\ldots,t\).
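As an illustration only, the tuple \(\langle P,S,A \rangle\) can be rendered as a small simulation in which an agent accumulates its percept, state, and action histories over moments \(0,\ldots,t\). The state-update and action rules below are arbitrary placeholders chosen for the sketch; they are not part of Wang’s definition:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """An agent history <P, S, A>: percepts, internal states, actions."""
    P: list = field(default_factory=list)  # input signals p_0 ... p_t
    S: list = field(default_factory=list)  # internal states s_0 ... s_t
    A: list = field(default_factory=list)  # actions a_0 ... a_t

    def step(self, percept):
        # Placeholder dynamics: the new state accumulates the percepts
        # seen so far, and the action simply reports the current state.
        state = (self.S[-1] if self.S else 0) + percept
        self.P.append(percept)
        self.S.append(state)
        self.A.append(state)
        return state

agent = Agent()
for p in [1, 2, 3]:
    agent.step(p)
print(agent.P, agent.S, agent.A)
```

The point of the abstraction is that the three sequences, not the agent’s internals, are what the five perspectives below compare between a human \(H\) and a computer \(C\).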

Human (H) vs Computer (C)

AI is conceived as computer systems that are similar to the human mind in a certain sense, though a computer and a human mind cannot be identical in all aspects.

\[ \langle P^H,S^H,A^H \rangle \approx \langle P^C,S^C,A^C \rangle \]

. . .

Wang (2019) proposes 5 perspectives: Structure-AI, Behavior-AI, Capability-AI, Function-AI, and Principle-AI.

1. Structure-AI

(brain modelling, cognitive science)

I call this type of definition “Structure-AI,” since it requires an AI system to go through isomorphic states or structure changes as the brain does when they are given similar input, which will produce similar output, so the three components of the two are pairwise similar to each other:

\[ P^H \approx P^C, S^H \approx S^C, A^H \approx A^C \]

2. Behaviour-AI

(Turing Test)

One way to acknowledge a human-like mind without demanding a human-like brain is to associate intelligence to the external behaviors of the agent. After all, if an agent behaves like a human, it should be considered as intelligent, no matter whether it looks like a human, either inside or outside.

\[ P^H \approx P^C, A^H \approx A^C \]

3. Capability-AI (Employment Test)

In the agent framework, it means that \(C\) is similar to \(H\) in the sense that there are moments \(i\) and \(j\) that:

\[ p_i^C \approx p_j^H, a_i^C \approx a_j^H \]

the action (solution) the computer produces for a percept (problem) is similar to the action produced by a human to a similar percept (\(\ldots\)) In this way, the intelligence of a system is identified by a set of problems it can solve, while whether they are solved in the “human way” does not matter.

Capability-AI (contd)

“I suggest we replace the Turing test by something I will call the ‘employment test.’ To pass the employment test, AI programs must be able to perform the jobs ordinarily performed by humans. Progress toward human-level AI could then be measured by the fraction of these jobs that can be acceptably performed by machines”

4. Function-AI

In the agent framework, this “Function-AI” perspective takes \(C\) to be similar to \(H\) in the sense that there are moments \(i\) and \(j\) that:

\[ a_i^C \approx f^C(p_i^C), a_j^H \approx f^H(p_j^H), f^C \approx f^H \]

Here the function can correspond to searching, reasoning, learning, etc., and since the focus is on the functions (i.e., input-output mappings), the concrete input and output values of the two agents do not have to be similar to each other.

5. Principle-AI (rationality, logicist)

As in any field, there are researchers in AI trying to find fundamental principles that can uniformly explain the relevant phenomena. Here the idea comes from the usage of intelligence as a form of rationality (\(\ldots\)) that can make the best-possible decision in various situations, according to the experience or history of the system.

\[ A^C = F^C(P^C), A^H = F^H(P^H), F^C \approx F^H \]

The above \(F\) is often not formally specified, but described informally as a certain “principle,” which is not merely about a single type of problem and its solution, but about the agent’s life-long history in various situations, when dealing with various types of problems.

Code of the day

# -*- Mode: Python -*-
# 01_tts_course_description.py
# Author          : Marcel Turcotte
# Created On      : Tue Feb 13 16:29:41 2024
# Last Modified By: Marcel Turcotte
# Last Modified On: Tue Feb 13 17:22:44 2024

from openai import OpenAI

# Initialize OpenAI client
client = OpenAI()

# Create an audio file from the given text
def create_audio(prompt, speech_file_path):
    with client.audio.speech.with_streaming_response.create(
        model="tts-1-hd",
        voice="nova",
        input=prompt,
    ) as response:
        response.stream_to_file(speech_file_path)


# Translating to French
def translate_to_french(input_text):
    output_text_fr = ""
    try:
        # Create translation response
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {
                    "role": "system",
                    "content": "You will be provided with a text in English."
                },
                {
                    "role": "system",
                    "content": "Your task is to translate it into Canadian French."
                },
                {
                    "role": "system",
                    "content": "Note that the course code CSI4106 translates to CSI4506 in French."
                },
                {
                    "role": "user",
                    "content": input_text
                }
            ],
            temperature=0.2,
            max_tokens=4095,
        )
        output_text_fr = response.choices[0].message.content
    except Exception as e:
        print(f"Error: {e}")
    return output_text_fr

# Define file path and input text
speech_file_path_fr = "01_tts_course_description-fr-nova.mp3"
speech_file_path_en = "01_tts_course_description-en-nova.mp3"

input_text_en = (
    'Welcome to CSI4106, "introduction to artificial intelligence"! '
    "In this course, you will learn about the roots and scope of Artificial Intelligence. "
    "Knowledge and knowledge representation. Search, informed search, adversarial search. "
    "Deduction and reasoning. Uncertainty in Artificial Intelligence. "
    "Introduction to Natural Language Processing. Elements of planning. Basics of Machine Learning."
)

input_text_fr = translate_to_french(input_text_en)

create_audio(input_text_fr, speech_file_path_fr)
create_audio(input_text_en, speech_file_path_en)

Marcel Turcotte

Marcel.Turcotte@uOttawa.ca

School of Electrical Engineering and Computer Science (EECS)

University of Ottawa