
Commit f36f734 ("more research - all clean 3/11")
1 parent: 81edaef

35 files changed: +317 and -115 lines

_data/publist.yml  (+90 -90)

Large diff not rendered by default.

_data/research.yml  (+32 -9)

@@ -1,24 +1,47 @@
 - title: "Bio-Sensing and VR"
-  image: bio.jpg
-  description: The work explores a more affective, bio-responsive experience for real-time/VR AI singularly or multiuser environments especially where mindfulness and wellness are a goal.
+  image: t_bio.jpg
+  description: Bio and brain sensing for interactive and VR AI single-user / multiuser environments, especially where mindfulness/wellness are a goal.
   page: /bioBrainVR
   highlight: 1

 - title: "AI Virtual Humans"
-  image: covid.jpg
+  image: t_covid.jpg
   description: For real-time/VR AI singularly or multiuser environments especially where mindfulness and wellness are a goal.
   page: /virtualHumans
   highlight: 1

 - title: "AI Anonymization"
-  image: aianon.jpg
-  description: Bringing together an interdisciplinary team, we have researched and created a wholly new AI technique to anonymize interview subjects and scenes in regular and 360 videos.
-  page: /AIAnon
+  image: t_aianon.jpg
+  description: Researched and created a wholly new AI technique to anonymize interview subjects and scenes in regular and 360 videos.
+  page: /aiAnon
   highlight: 1

 - title: "AI Cognitive Creativity"
-  image: aicreative.jpg
+  image: t_aicreative.jpg
   description: AI modelling aspects of human creativity in AI using cognitive science as a basis for our work
-  page: /AICreativity
+  page: /aiCreativity
+  highlight: 1
+
+- title: "AI Modeling Animals"
+  image: t_aianimals.jpg
+  description: AI modelling of the behaviour (thinking, movement, expression, ...) of animals and humans.
+  page: /aiAnimals
+  highlight: 1
+
+- title: "XR Avatars; Edu, Coaches, Health"
+  image: t_xrAvatars.jpg
+  description: XR (VR/AR/3D) systems where a user can express themselves as a talking 3D avatar with gestures/facial expressions. For education, coaches and health.
+  page: /xrAvatars
+  highlight: 1
+
+- title: "AI Modeling Thought & Language"
+  image: t_aiThought.jpg
+  description: Cognitive modelling of thought, expression and language using advanced AI systems like AI Knowledge Graphs of thought and chained RAG-based LLMs.
+  page: /aiThought
+  highlight: 1
+
+- title: "Semantic Meaning & Analysis"
+  image: t_aiMeaning.jpg
+  description: Cognitive-based analysis and modelling of deep Semantic Meaning.
+  page: /aiMeaning
   highlight: 1
-
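Each entry's `page:` value has to line up with the `permalink:` in the front matter of the matching file under `_pages/` (which is why the case changes /AIAnon to /aiAnon and /AICreativity to /aiCreativity appear both in this file and in the page diffs below), and `image:` is resolved under `images/res/` by the listing template in `_pages/research.md`. A minimal sketch of that pairing for one of the new entries, condensed from the diffs in this commit; the field names come from the diffs, the comments are editorial:

```yaml
# One _data/research.yml entry (from the diff above)
- title: "AI Modeling Animals"
  image: t_aianimals.jpg   # looked up as images/res/t_aianimals.jpg by _pages/research.md
  page: /aiAnimals         # must match the permalink of the page below
  highlight: 1             # highlight flag (its exact use is not shown in this diff)

# Matching front matter, condensed from the new _pages/r_aiAnimals.md below:
# layout: textlay
# permalink: /aiAnimals/
```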

_includes/footer.html  (+4 -7)

@@ -3,11 +3,9 @@
 <div class="container-fluid">
 <div class="row">
 <div class="col-sm-4">
-
-<p>&copy 2024 iVizlab. We are part of the <a href="https://www.sfu.ca/siat.html">School of Interactive Art and Tech </a> at <a href="https://www.sfu.ca/">Simon Fraser University</a>.</p>
+&copy 2024 iVizlab - - - - - - - - - - - - - - - - - - - - -
+<p>We are part of the <a href="https://www.sfu.ca/siat.html">School of Interactive Art and Tech </a> at <a href="https://www.sfu.ca/">Simon Fraser University</a>.</p>
 Site made with <a href="https://jekyllrb.com">Jekyll</a>; <a href="{{ site.url }}{{ site.baseurl }}/aboutwebsite.html">copy and modify.</a></p>
-<a href="{{ site.url }}{{ site.baseurl }}/media">Pix and Media</a>
-


 <p> </p><p>
@@ -16,10 +14,9 @@
 </div>

 <div class="col-sm-4">
-Contact:<br />
-Steve DiPaola<br />
+<a href="{{ site.url }}{{ site.baseurl }}/media">Lab Pix and Media</a> - - - - - - - - - - - -<br />
+Contact: Steve DiPaola<br />
 Simon Fraser University<br />
-250-13450 102 Ave.<br />
 Surrey, BC, Canada V3T 0A3<br />
 <br />
 </div>

_pages/r_AiAnon.md  (+3 -3)

@@ -1,9 +1,9 @@
 ---
 title: "iVizLab - Research"
 layout: textlay
-excerpt: "iVizLab -- Research"
+excerpt: "iVizLab -- AI Anonymization"
 sitemap: false
-permalink: /AIAnon/
+permalink: /aiAnon/
 ---

 # AI Anonymization
@@ -35,7 +35,7 @@ See main site at [aipaint360.org](https://aipaint360.org)
 See main site at [aipaint360.org](https://aipaint360.org)

 {% for publi in site.data.publist %}
-{% if publi.research contains 'AIAnon' %}
+{% if publi.research contains 'aiAnon' %}
 <pubtit>{{ publi.title }}</pubtit> by
 {{ publi.authors }} -- <pubtit>{{ publi.type }}</pubtit> -- {{ publi.description }}
 <br> <a href="{{ publi.url }}">{{ publi.display }}
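The loop above reads entries from _data/publist.yml and keeps only those whose `research` field contains the (now lowercase) tag `aiAnon`; this retagging is presumably also why _data/publist.yml changed by 90 lines in this commit, though that diff is not rendered above. A minimal sketch of an entry that would match, assuming only the field names the Liquid loop actually reads; all values are hypothetical:

```yaml
# Hypothetical _data/publist.yml entry -- field names are taken from the
# Liquid loop above; the values are placeholders, not a real publication.
- title: "Example AI Anonymization Paper"
  authors: "S. DiPaola and co-authors"
  type: "Conference Paper"
  description: "One-line summary shown under the title."
  url: "https://example.org/paper.pdf"
  display: "PDF"
  research: "aiAnon"   # must contain the lowercase tag used in the filter
```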

_pages/r_AiCreativity.md  (+2 -2)

@@ -1,9 +1,9 @@
 ---
 title: "iVizLab - Research"
 layout: textlay
-excerpt: "iVizLab -- Research"
+excerpt: "iVizLab -- AI Cognitive Creativity"
 sitemap: false
-permalink: /AICreativity/
+permalink: /aiCreativity/
 ---

 # AI Cognitive Creativity

_pages/r_VitualHumans.md  (+1 -2)

@@ -18,8 +18,7 @@ Reseachers: Steve DiPaola, Mahdi Davoodikakhki, Andrey Goncharov, Nilay Ozge Yal
 Our open-source toolkit / cognitive research in AI 3D Virtual Human (embodied IVA : Intelligence Virtual Agents) : a real-time system that can converse with a human by sensing their emotions and conversation ( via facial emotion recognition, voice stress, semantics of the speech and words) and respond affectively, emotionally (voice, facial animation, gesture, etc) to a user in front of it via a host of gestural, motion and bio-sensor systems, with several in lab AI systems and give a coherent, personality-based conversational answers via speech, expression and gesture. The system uses Unity and SmartBody (USC) API who we have collaborated with for years. We use cognitive modeling, empathy modelling, NLP and a variety of AI-based modules in our system (see papers).


-Our **affective real-time 3D AI virtual human** setup with face emotion recognition, movement recognition and data glove recognition. See overview video or specific videos or papers below
-
+Our **affective real-time 3D AI virtual human** setup with face emotion recognition, movement recognition and hand gesture recognition.
 <br>
 <iframe width="450" height="230" src="https://www.youtube.com/embed/RMLD7jccv_w?rel=0" frameborder="0" allowfullscreen></iframe>
 <br>

_pages/r_aiAnimals.md  (new file, +40)

@@ -0,0 +1,40 @@
+---
+title: "iVizLab - Research"
+layout: textlay
+excerpt: "iVizLab -- AI Modeling Animals"
+sitemap: false
+permalink: /aiAnimals/
+---
+
+# AI Modeling Animals
+
+
+RESEARCH :: See [Related papers](#paperSection) :: Related Projects: [Research]({{ site.url }}{{ site.baseurl }}/research)
+
+Researchers: Steve DiPaola, Bill Kraus
+
+
+**The Research:**
+Our AI work here simulates complex (human and) animal behaviour using both Action Selection AI and neural networks (see papers). Technology is becoming increasingly incorporated into exhibit design, both on the floor and extended online. Our AI work focuses on the design of VR AI interactive exhibits for an aquarium gallery. The goal was to use technology to better immerse and engage visitors in complicated educational concepts about the life of wild belugas. We were interested in encouraging deeper exploration of the content than is typically possible via wall signage or a video display. The beluga simulation uses extremely realistic graphics and is based on an intelligent system that allows the virtual belugas to learn and alter their behaviour based on visitor interaction. It was informed by research data from the live belugas (e.g. voice recordings tied to mother/calf behavior) obtained from interviews with the marine mammal scientists and education staff. Observation and visitor studies determined that visitors rarely visit alone, so the interface was designed to encourage collaboration. It allows visitors and their companions to engage in “what-if” scenarios of wild beluga emergent behavior via a 3D interactive that uses artificial intelligence, physically based animation, and real-time graphics. The program could be linked to the aquarium web site to allow for an extension of the aquarium visitor experience.
+<br>
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/whale1.jpg" class="img-responsive" width="70%"/>
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/whale2.jpg" class="img-responsive" width="70%"/>
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/whale3.jpg" class="img-responsive" width="70%"/>
+
+<br>
+
+
+<div id="paperSection"></div>
+
+
+**------ PAPERS: AI Modeling Animals ------**
+
+
+{% for publi in site.data.publist %}
+{% if publi.research contains 'aiAnimals' %}
+<pubtit>{{ publi.title }}</pubtit> by
+{{ publi.authors }} -- <pubtit>{{ publi.type }}</pubtit> -- {{ publi.description }}
+<br> <a href="{{ publi.url }}">{{ publi.display }}
+{% endif %}
+{% endfor %}
+

_pages/r_aiMeaning.md  (new file, +53)

@@ -0,0 +1,53 @@
+---
+title: "iVizLab - Research"
+layout: textlay
+excerpt: "iVizLab -- Semantic Meaning & Analysis"
+sitemap: false
+permalink: /aiMeaning/
+---
+
+# Semantic Meaning & Analysis
+
+
+RESEARCH :: See [Related papers](#paperSection) :: Related Projects: [Research]({{ site.url }}{{ site.baseurl }}/research)
+
+Researchers: Steve DiPaola, Suk Choi, Meehae Song, Nouf Abukhodair, Vanessa Utz
+
+
+**The Research:**
+Early deep learning systems are trained on huge datasets of text/image pairs that, while they can embed simple meaning, are still mainly trained on nouns like people, things, ... Our main research area is to go beyond that to deeper Semantic Meaning & Analysis work, which parses and models more multimodal and deeper semantic meaning in AI work. We do this both by analyzing meaning space (cognitive science of the arts, gesture, emotion, empathy, creativity, behavior, ...) and by building models from cognitively and rigorously mapping these spaces, which we can then use in our new emerging systems (with applications in health, the arts, and education).
+<br><br>
+Our work on a new method to understand, measure and model "aesthetic emotion" (see studies and papers).
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/s1.jpg" class="img-responsive" width="70%"/>
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/s2.jpg" class="img-responsive" width="70%"/>
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/s3.jpg" class="img-responsive" width="70%"/>
+Reverse engineering generative visual AI systems to better understand how they currently make meaning, so we can improve them. Here, an emotionally complex prompt, “an angry man punching a bag in a crowded joyful gym”, and the diffusion-based output, where we then used an advanced system to reverse-detect and heatmap the meaning of every word back through the system.
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/s4.jpg" class="img-responsive" width="70%"/>
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/s5.jpg" class="img-responsive" width="70%"/>
+Deep studies with eye tracking on how an art viewer perceives (cognitively) artwork where we AI-redraw a masterwork (switching left/right eye detail & lost-and-found edges near the chin). See several papers and over 200 press articles on our findings - like this one: [Magic of Rembrandt's Painting Technique Revealed](https://www.livescience.com/9920-magic-rembrandt-painting-technique-revealed.html).
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/s6.jpg" class="img-responsive" width="70%"/>
+Additional studies on deep meaning in aesthetics, emotions, creativity, empathy, ... (see papers).
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/s7.jpg" class="img-responsive" width="70%"/>
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/s8.jpg" class="img-responsive" width="70%"/>
+
+<iframe width="450" height="230" src="https://www.youtube.com/embed/N4Xr6Zm7Fes?rel=0" frameborder="0" allowfullscreen></iframe>
+<iframe width="450" height="230" src="https://www.youtube.com/embed/O_FaV-6hahM?rel=0" frameborder="0" allowfullscreen></iframe>
+
+
+<br>
+
+
+<div id="paperSection"></div>
+
+
+**------ PAPERS: Semantic Meaning & Analysis ------**
+
+
+{% for publi in site.data.publist %}
+{% if publi.research contains 'aiMeaning' %}
+<pubtit>{{ publi.title }}</pubtit> by
+{{ publi.authors }} -- <pubtit>{{ publi.type }}</pubtit> -- {{ publi.description }}
+<br> <a href="{{ publi.url }}">{{ publi.display }}
+{% endif %}
+{% endfor %}
+

_pages/r_aiThought.md  (new file, +44)

@@ -0,0 +1,44 @@
+---
+title: "iVizLab - Research"
+layout: textlay
+excerpt: "iVizLab -- AI Modeling Thought & Language"
+sitemap: false
+permalink: /aiThought/
+---
+
+# AI Modeling Thought & Language
+
+
+RESEARCH :: See [Related papers](#paperSection) :: Related Projects: [Research]({{ site.url }}{{ site.baseurl }}/research)
+
+Researchers: Steve DiPaola, Rafael Arias Gonzalez, Nilay Yalcin, Maryam Ahmadzadeh
+
+
+**The Research:**
+Our AI work here is cognitive modelling of thought, expression and language using advanced AI systems like AI Knowledge Graphs of thought and chained RAG-based LLMs, allowing the public to talk as faithfully as possible to inspiring historical figures like our Picasso and Van Gogh. With Van Gogh, we used our specific AI pre-processing of the times of his life, historical events/people, and the 700 letters he wrote in his own words to his brother Theo into a knowledge system that then goes through additional 3D facial expression, body gesture, talking and voice AI systems.
+<br>
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/vg1.jpg" class="img-responsive" width="70%"/>
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/vg2.jpg" class="img-responsive" width="70%"/>
+
+Our VR talking, expressing animated character can even dynamically morph into painted form based on his emotional state.
+<img src="{{ site.url }}{{ site.baseurl }}/images/res/vg3.jpg" class="img-responsive" width="70%"/>
+
+Talking with our Virtual AI Picasso live (created with iVizLab research and [Virtro](https://www.virtro.ca/))
+<iframe width="550" height="330" src="https://www.youtube.com/embed/Up7rLNkDkRo?rel=0" frameborder="0" allowfullscreen></iframe>
+<br>
+
+
+<div id="paperSection"></div>
+
+
+**------ PAPERS: AI Modeling Thought & Language ------**
+
+
+{% for publi in site.data.publist %}
+{% if publi.research contains 'aiThought' %}
+<pubtit>{{ publi.title }}</pubtit> by
+{{ publi.authors }} -- <pubtit>{{ publi.type }}</pubtit> -- {{ publi.description }}
+<br> <a href="{{ publi.url }}">{{ publi.display }}
+{% endif %}
+{% endfor %}
+

_pages/r_xrAvatars.md  (new file, +46)

@@ -0,0 +1,46 @@
+---
+title: "iVizLab - Research"
+layout: textlay
+excerpt: "iVizLab; XR Avatars; Edu, Coaches, Health"
+permalink: /xrAvatars/
+---
+
+# XR Avatars; Edu, Coaches, Health
+
+
+RESEARCH :: See [Related papers](#paperSection) :: Related Projects: [Publications]({{ site.url }}{{ site.baseurl }}/research)
+
+Researchers: Steve DiPaola, Mahdi Davoodikakhki, Andrey Goncharov, Nilay Ozge Yalcin
+
+
+**The Research**
+XR (VR/AR/3D) systems where a person (at home or work) can fully express themselves as a 3D avatar in desktop or full VR, with and without motion tracking of their gestures and facial expressions. For education (a fully credited university XR class), as coaches for the elderly and others, and as a trainer in health (like we did here with nurses during COVID).
+
+<br>
+Our work with a fully credited university XR class during COVID (COGS100 at SFU), using XR systems with full mocap avatars, interactive multimedia, ...
+<iframe width="450" height="230" src="https://www.youtube.com/embed/RMLD7jccv_w?rel=0" frameborder="0" allowfullscreen></iframe>
+<iframe width="450" height="230" src="https://www.youtube.com/embed/mkWEz01Z1kw?rel=0" frameborder="0" allowfullscreen></iframe>
+
+Fully reacting AR coaches in front of you, to discuss climate change or, here, to assist an elderly woman (so she can stay & live well in her home rather than a nursing home).
+<iframe width="450" height="230" src="https://www.youtube.com/embed/JS58OBE0TwM?rel=0" frameborder="0" allowfullscreen></iframe>
+<iframe width="450" height="230" src="https://www.youtube.com/embed/2NmsT3VgZXg?rel=0" frameborder="0" allowfullscreen></iframe>
+
+Full motion-tracked VR interactive for health training of nurses (during COVID).
+<iframe width="450" height="230" src="https://www.youtube.com/embed/xB1ZPNC1Vdo?t=16?rel=0" frameborder="0" allowfullscreen></iframe>
+
+<div id="paperSection"></div>
+
+
+<br><br>
+**------ PAPERS: XR Avatars; Edu, Coaches, Health ------**
+
+
+
+{% for publi in site.data.publist %}
+{% if publi.research contains 'xrAvatar' %}
+<pubtit>{{ publi.title }}</pubtit> by
+{{ publi.authors }} -- <pubtit>{{ publi.type }}</pubtit> -- {{ publi.description }}
+<br> <a href="{{ publi.url }}">{{ publi.display }}
+{% endif %}
+{% endfor %}
+
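One small note on the filter above: when `publi.research` is a plain string, Liquid's `contains` performs a substring check, so the `'xrAvatar'` filter also matches a tag written as `xrAvatars`; if the field were a YAML list, `contains` would instead require an exact element match. A hypothetical string-valued tag, for illustration only:

```yaml
# Hypothetical: one publist.yml field, written as a single string so the
# substring filter {% if publi.research contains 'xrAvatar' %} matches it.
research: "xrAvatars, virtualHumans"
```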

_pages/research.md  (+2 -2)

@@ -25,8 +25,8 @@ permalink: /research/

 <div class="col-sm-6 clearfix">

-<strong> [{{ publi.title }}]( {{ publi.page | relative_url }} )
-
+<strong> [{{ publi.title }}]( {{ publi.page | relative_url }} )
+</strong>
 <img src="{{ site.url }}{{ site.baseurl }}/images/res/{{ publi.image }}" class="img-responsive" width="33%" style="float: left" />
 <p>{{ publi.description }}</p>

New image files added:

images/res/s1.jpg (109 KB)
images/res/s2.jpg (112 KB)
images/res/s3.jpg (148 KB)
images/res/s4.jpg (70.2 KB)
images/res/s5.jpg (121 KB)
images/res/s6.jpg (178 KB)
images/res/s7.jpg (187 KB)
images/res/s8.jpg (92.8 KB)
images/res/t_aiMeaning.jpg (128 KB)
images/res/t_aiThought.jpg (161 KB)
images/res/t_aiThought2.jpg (115 KB)
images/res/t_aianimals.jpg (70.8 KB)
images/res/t_xrAvatars.jpg (68.1 KB)
images/res/vg1.jpg (122 KB)
images/res/vg2.jpg (98.6 KB)
images/res/vg3.jpg (232 KB)
images/res/whale1.jpg (77 KB)
images/res/whale2.jpg (122 KB)
images/res/whale3.jpg (147 KB)

5 image files renamed without changes.
