---
title: Home
---

# Perception and LANguage (PLAN) Group

Our Perception and LANguage (PLAN) research lab is broadly interested in multimodal machine learning and learning with limited supervision. We focus in particular on building intelligent task assistants that fuse linguistic, visual, and other modalities to perceive and interact with the world. Current language + vision projects include multimodal representation learning, contrastive self-supervision, embodied AI, video localization, and multi-agent communication. Applications span healthcare, medical imaging, manufacturing, and misinformation detection.

{:.center}
{% include link.html type="github" icon="" text="GitHub" link="PLAN-Lab" style="button" %}

{% include banner.html image="images/members/lab_picture.jpg" %}

{% include section.html full=true %}

## News

{% include news.html %}

{% include section.html %}

## Funding

Our work is made possible by funding from several organizations.

{:.center}
{% include banner.html image="images/funding/funding_merged.png" %}