Workshop Theme and Goals
Users of digital devices are increasingly confronted with a tremendous number of notifications that appear on multiple devices and screens in their environment. If a user owns a smartphone, a tablet, a smartwatch, and a laptop, and an email client is installed on all of these devices, a single incoming email produces up to four notifications, one on each device. In the future, we will receive notifications from all of our ubiquitous devices. We therefore need smart attention management for incoming notifications. One approach to less interruptive attention management is the use of ambient representations of incoming notifications.
The goal of this workshop is to discuss how the problems of information overload and overchoice, in our opinion two of the most relevant problems in information technology for the next few decades, can be addressed. In the era of the Internet of Things (IoT), we have to handle incoming notifications from all of our devices. Together with developments in smart city environments and smart mobility, information overload will continue to grow. In this workshop, we want to develop a broader understanding of the different roles notifications can play in a wide variety of computing environments, including the office, the home, cars, and other smart environments.
Topics of the workshop include, but are not limited to:
- Understanding behavior and habits around notifications
- Detection/prediction of availability, attention, and opportune moments for interruptions
- Ambient, peripheral, distributed and multimodal presentation of information or augmentation
- Timing of proactive recommendations and user engagement
- Infrastructures, frameworks and tools for the development of smart attention systems
- Strategies for attention management from notifications of IoT devices
- Understanding users' behavior and habits around notifications and interruptions, including longer term user engagement and behavior change
- Use of ambient representations for big data analysis
- Management of information overload in smart city environments and cyber physical systems or smart mobility and vehicle environments
- Notifications in virtual reality (VR) and augmented reality (AR)
- Supporting the digital wellbeing of users
Modeling Human Behavior through Mobile Sensing
Smartphones have evolved over time from mere calling instruments into smart and highly personal devices. Besides being technically advanced and pervasive, these devices offer a plethora of embedded sensing capabilities that enable us to passively log users' context and to collect such data at an unprecedented scale. In this talk, I will discuss the understanding and modeling of human behavior through mobile sensing and machine learning. More specifically, I will focus on modeling users' interaction with mobile devices, and on anticipatory monitoring of health and well-being.
Abhinav Mehrotra is a Machine Learning Research Engineer at the Samsung AI Center, Cambridge. His main areas of interest include context-aware computing and AI systems. Before joining Samsung, he was a postdoctoral researcher at University College London. He received a PhD in Computer Science from the University of Birmingham, during which he spent a year at the Alan Turing Institute (the UK's national data science institute).
We are very proud to have received so many excellent submissions. Please find the list of all five accepted papers below.
- Connecting IM Pattern and Selective Perceived Responsiveness to Relationship: A Cluster-Based Approach
Hao-Ping Lee, Kuan-Yin Chen, Chih-Heng Lin, and Yung-Ju Chang
- Preferred Notification Modalities Depending on the Location and the Location-Based Activity
Anja Exler, Zeynep Günes, and Michael Beigl
- Attention Computing: Overview of Mobile Sensing Applied to Measuring Attention
Aku Visuri and Niels van Berkel
- Using Electrochromic Displays to Display Ambient Information and Notifications
Heiko Müller, Ashley Colley, Jonna Häkkilä, Walther Jensen, and Markus Löchtefeld
- The Impact of Private and Work-Related Smartphone Usage on Interruptibility
Christoph Anderson, Judith S. Heinisch, Sandra Ohly, Klaus David, and Veljko Pejovic
September 10, 2019
| Time | Programme |
| --- | --- |
| 08:30 am | Registration opens |
| 09:30 am – 09:40 am | Introductions |
| 09:40 am – 10:30 am | Keynote |
| 10:30 am – 11:00 am | Coffee break |
| 11:00 am – 12:30 pm | Presentation session |
| 12:30 pm – 02:00 pm | Lunch break |
| 02:00 pm – 03:30 pm | Break-out discussions |
| 03:30 pm – 04:00 pm | Coffee break |
| 04:00 pm – 05:00 pm | Discussion of group findings |
| 05:00 pm – 05:30 pm | Wrap-up & planning of future actions |
| 06:30 pm – 08:00 pm | Workshop dinner |
Papers should be anonymized, have a length of 2 to 8 pages (excluding references) in the new single-column SIGCHI Extended Abstracts format, and will be reviewed by at least two workshop organisers. Successful submissions will have the potential to spark discussion, provide insights for other attendees, and illustrate open challenges and potential solutions. All accepted papers will be published on the workshop website and in the ACM Digital Library.
At least one author of each accepted paper needs to register for the conference and the workshop itself. During the workshop, each paper will be given time for an oral presentation. In addition, there will be room for demonstrations and hands-on sessions.
Template Update (June 17): The required template was changed on short notice. For your initial submission, you can use the SIGCHI Extended Abstracts format (landscape, single column) with a page limit of 8 pages excluding references. Alternatively, you may already submit in the SIGCHI format (portrait, double column) with 2 to 6 pages including references. In case of acceptance, you will have to submit your camera-ready paper using the SIGCHI format. All template information can be found here. If you have any questions about the submission, feel free to send us an email at email@example.com.
Karlsruhe Institute of Technology
University of Stuttgart
University of Stuttgart
If you have any questions, don't hesitate to get in touch via email: firstname.lastname@example.org
You can also get in touch via our Facebook page.