Ubiquitous sensing will soon allow us to record any moment of our lives. These moments can be restored and used to create radically new ways of aiding human memory. The goal of memory aids is to recall what matters: retrieving relevant information at the right time, to the right extent, and in a context-driven way. We are looking for visions and research projects that aim to re-think and re-define the notion of memory augmentation. The goal is to combine technological innovations in ubiquitous computing with basic research questions in memory psychology, thereby elevating memory augmentation technologies from a clinical niche application to a mainstream technology and initiating a major change in the way we use technology to remember and to externalize memory.
In this position paper we focus on wearable activity recognition tools with regard to their ability to detect human activities and thus enable the user to recall everyday experience in a new way. The capability of activity recognition to detect, store, and present activities to the person who performed them can not only help recall the activities themselves but also encourage the user to remember experiences related to them. To demonstrate this, we present two projects (cases) in which wearable activity recognition is used to support users’ recall capabilities. We then present a narrative theory of action and mind, which focuses on how humans retrospectively interpret and structure personal experience in their minds, their so-called autobiographical memory. Finally, we present some further concepts and distinctions about what it means to memorize and recall personal data.
Today’s abundance of cheap digital storage in the form of tiny memory cards puts practically no bounds on the number of images one can capture with a digital camera or camera phone during an event. However, studies have shown that taking many pictures may actually make us remember less of a particular event. In this position paper, we propose to re-introduce the paradigm of the old film camera in the context of modern smartphones. The purpose is to investigate how users behave when a significant capture limitation is imposed in a picture-taking context, and what kind of pictures this produces. Ultimately, we are interested in the effect of such a limitation on memory recall, and we describe a potential study setup that will help us explore this question.
In recent years, data collection and communication have become increasingly ubiquitous, to the extent that it is now possible to capture and communicate many parts of lived experience. In a novel approach, we propose recording events, interactions, and annotations in order to access characteristics that communicate the reasoning behind care providers’ decision-making. Recording is done with free-form and implicit data collection, and the spatio-chronological characteristics of events, interactions, and annotations are communicated through augmented interfaces. This enables care providers, who make the decisions, to identify which factors have played the most significant role in their decision-making. In the context of chronic care, this research aims at better understanding how to capture and communicate the medical decision-making process. Our preliminary experiments show success in communicating the reasoning processes of document analysis sessions in a lab environment. We have started to look at how this improves the reliability and practice outcomes of decision-making in real-life medical environments.
In today’s world, there are increasingly many things to remember. Often the information is linked to physical-world objects, for instance usage instructions, personal histories, access codes, or expiring guarantee dates. Mobile augmented reality (MAR) can provide a design approach where we utilize our everyday surroundings and attach information to items without visibly modifying their appearance. In this paper, we explore selected MAR scenarios from the point of view of augmented human memory. We evaluated these scenarios in an online survey with 19 participants.
Ubiquitous technology has prompted the use of location-based reminders (LBRs) to help people remember to do things while away from their desks. However, LBRs are still not an effective tool for mobile users. Our work explores how to make LBRs better by using theories of memory, in particular prospective memory, and by treating the system that captures the LBRs as an external memory aid. With the knowledge from these two pre-existing bodies of literature (prospective memory and external memory aids), we set out to explore how to influence the design and use of LBRs. In this paper, we propose a framework that draws on knowledge and principles from cognitive psychology and show how we might be able to improve LBRs. Our ultimate goal is to facilitate human memory recall for prospective tasks.
Experimental work in the life sciences is done in protective garments to contain harmful agents and to avoid contamination. This limits the amount of documentation that can be done during experimentation, since pen and paper and other equipment are hardly allowed in those environments. Relying on memory alone, scientists have to reconstruct the important details of their experiments later on. Wearable computers, like Google Glass or wrist-worn smartwatches, can enhance a scientist’s ability to record key information while conducting an experiment. In particular, the possibility of hands-free and implicit interaction with the wearable system creates new opportunities for augmenting the scientist’s memory.
In this position paper we outline a technology concept for making new situations and encounters more familiar and less threatening. Going to new places, interacting with new people, and carrying out new tasks is part of everyday life. New situations create a sense of excitement but in many cases also anxiety based on a fear of the unknown. Our concept uses the metaphor of a pin board as a peripheral display to automatically provide advance information about potential future experiences. By providing references to and information about future events and situations, we aim at creating a “feeling of having already experienced the present situation” (the term déjà vu as defined in the Oxford Dictionary) once people find themselves in a new situation. This draws on the positive reading of the concept of déjà vu. In this paper we outline our idea and use scenarios to illustrate its potential. We assess different ways the concept can be realized and chart potential technologies for content creation and presentation. We also discuss the impact on human memory and how the concept changes experiences.
Lifelogging has much to offer human memory. Traditional lifelogging techniques use wearable cameras to capture a first-person or ‘field’ view. We propose an alternative or complementary approach in which fixed infrastructure cameras provide a third-person or ‘observer’ view of daily events. In this paper we identify key advantages and challenges for a fixed infrastructure approach to lifelogging.
The emerging field of cognitive activity recognition – real-life tracking of mental states – can give us new possibilities to enhance our minds. In this paper, we outline the use of wearable computing to engage the user’s mind. We argue that the more personal technology becomes, the more it should also adapt to the user’s long-term goals, improving mental fitness. We present the concept of computing to engage our minds and discuss some enabling technologies as well as challenges and opportunities.
Lifelogging technology, the use of sensing technologies to record and analyze one’s life, is on the rise. Products from industry and research in academia currently focus on using the collected data to support health and fitness. Given these trends, it is only a matter of time before we see mobile sensing technology applied to cognitive tasks, enabling novel research directions and use cases. In this workshop, we explored the implications of “knowledge logging”: how to record and track what we read, learn, and comprehend, and how this impacts research towards mind augmentation. The goal of this workshop was to combine innovations in ubiquitous computing with basic research in psychology and cognitive science. The aim was to bring mind augmentation technologies from a niche application in rehabilitation to a mainstream technology and to initiate a major change in the way we use technology to externalize our mind. This workshop brought together researchers, designers, and practitioners at the intersection of technology and cognitive psychology to discuss elements and viewpoints of knowledge logging, inferring cognitive states, and extending our perception.
We demonstrate the idea of a "Thermometer for the Mind": a system that estimates a user’s mental state from an activity log derived from wearable devices. While it is well known that physical and social activities are correlated with mental state, our aim is to build an application that tracks daily activities automatically, estimates the mental state, and visualizes it in an easily understandable way. As a preliminary experiment, we investigated how information about physical activity from a smartphone (step counts) and social activity from a Web service (Twitter post counts) can be used to estimate a user’s mental state. The method is evaluated on one participant’s 5-month recording. The classification accuracy for 3 classes (low, middle, and high mood) is 60%.
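To make this kind of pipeline concrete, the following is a minimal sketch of a mood classifier over daily activity features; the abstract does not name the model, so the decision tree, the feature layout, and the toy numbers are all assumptions for illustration.

```python
# Hypothetical sketch: estimate a 3-class mood label from daily step and
# tweet counts. The classifier actually used in the paper is not specified.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# One row per day: [step_count, tweet_count]; labels: 0 = low, 1 = middle, 2 = high.
X_train = np.array([[2900, 1], [4200, 3], [6100, 4], [7800, 7], [9500, 9], [11050, 12]])
y_train = np.array([0, 0, 1, 1, 2, 2])

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

today = np.array([[8300, 8]])   # today's step count and tweet count
print(clf.predict(today))        # -> estimated mood class for today
```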
Eye movement is an important source of information for reading analysis. We propose a method for computing a similarity measure between two fixation sequences. To estimate the effectiveness of the similarity measure, we investigate whether a high similarity is obtained when two subjects read the same document. An F1 score of 0.92 is obtained for retrieving the same document based on the reading similarity.
Eye-tracking data has been widely used to analyze our reading behavior. Usually, experiments are carried out with the head fixed or by analyzing eye-tracking data over large areas such as paragraphs. But if we want to analyze eye gaze line by line or word by word with a non-invasive apparatus, we have to deal with mislocation of the recorded gaze. This lack of accuracy makes it difficult to analyze the small eye movements that occur during reading. This paper compares three methods for matching lines of gaze with the corresponding text lines. We show that Dynamic Time Warping is a promising way to measure the similarity between a line of gaze and a text line.
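As an illustration of the DTW idea, the sketch below aligns a sequence of gaze x-coordinates with the word centers of a candidate text line; the choice of 1-D x-coordinates as features is an assumption, not necessarily the paper's exact setup.

```python
# Minimal DTW sketch for matching a recorded line of gaze to a text line.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping over 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

gaze_x = [12, 15, 40, 43, 80, 120, 160]   # noisy gaze x-positions on one line
line_x = [10, 45, 85, 125, 165]           # x-centers of the words in a text line
print(dtw_distance(gaze_x, line_x))       # lower = better match; assign the gaze
                                          # line to the text line minimizing this
```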
We are developing an app that presents photographs in such a way as to increase the user's motivation for activity by activating memory recall and awareness. In this paper, as an initial investigation for this project, we report on a photo-network visualization system and an initial experiment conducted to assess its validity. The system groups related pictures in terms of the objects and scenes in the images, and allows users to view the photographs interactively by selecting nodes in the network. In an experiment, participants reported that viewing the photographs on the system actually improved their memory recall. Furthermore, we uncovered evidence that certain photographs promoted memory recall better than others. We also analyzed the participants' tendencies when selecting nodes, and whether the photographs elicited feelings of fondness or interest.
My research explores sensory augmentation through noninvasive sensory substitution techniques. My current work focuses on designing tactile displays to improve learning and physical skill acquisition. In this paper, I discuss two tactile display projects that explore this space. The primary project is a wearable vibrotactile array on the forearm developed to provide real-time feedback on correct stance for archers. Second, we briefly discuss an add-on for tablet e-book readers that provides auditory and haptic stimulation.
Cognitive Behavioral Therapy (CBT) shows promise for the effective treatment and prevention of depression. During long-term treatment with CBT, users (e.g., patients with depression) are given homework in their daily lives to self-monitor their thoughts and behaviors using techniques taught by therapists (e.g., doctors) in face-to-face sessions. In this paper, we present an application that supports users in doing this homework, especially the self-monitoring of their behaviors. By collecting users' lifelogs via smartphones, the application estimates their behaviors (activities) from the lifelogs and shows the results to support their self-monitoring.
This paper describes a novel framework for literature that presents the reader with a multimodal experience (visual, haptic, and auditory). We present an initial prototype, extending an iPad with a surface vibration transducer for haptic feedback, to augment the reading of a short story. Text provides the reader with verbal information, whereas the other senses perceive the context through non-verbal cues. We use our own framework to create this storytelling across senses, integrating text, sound, animation, and tactile sensation. We believe that with a multimodal reading experience, users will enjoy reading more, feel more immersed, and remember the narrative better.
In this paper we outline work designed to improve and understand the resuscitation skills of student nurses undertaking medium-fidelity simulated emergency scenarios. We describe the educational and clinical context of the simulation, with a particular focus on how supervision, analysis, and feedback on student performance can be augmented by the use of sensors and other devices. Furthermore, we present initial findings from the use of sensors during video-captured student scenarios.
With demographic change and generally increasing product complexity, there is a growing demand for assistive technology to cognitively support workers during industrial production processes. Many approaches, including head-mounted displays, smart gloves, and in-situ projections, have been suggested to provide cognitive support for workers. Recently, research has focused on improving cognitive feedback by using activity recognition to make it context-aware. This way, an assistance technology is able to detect work steps and provide additional feedback in case the worker makes a mistake. However, when designing feedback for a rather monotonous task such as product assembly, it should neither over-challenge nor under-challenge the worker. In this paper, we sketch out requirements for providing cognitive assistance at the workplace that adapts to the worker's needs in real time. Further, we discuss challenges and provide design suggestions.
In this position paper, we present the concept of using eyewear to recognize and ultimately influence a person's emotional state. Eyewear computing, as a fairly novel computing paradigm, has potential especially in affect recognition. Eyeglasses are an established means of augmenting humans. Given that we can build smart glasses in a similar form factor, their position is ideal both for sensing and for giving user feedback. We outline the sensing potential of smart eyewear, discuss our initial efforts to recognize mental load and facial expressions using eyewear, and finally give an outlook on the future application space.
With the mobile phone turning into a lifelogging device alongside the prevalence of wearables, people are able to record, store, and make sense of their daily activities. Using such insights, applications can help monitor physiological data and motivate behavior change, but also create new ways to aid human memory: mobile devices not only allow us to create records of information, but also present us with proactive reminders and instant access to information relevant to the current situation and context, serving as cognition support and a basis for retrospection. This workshop brings together practitioners, designers, and researchers with the goal of exploring the requirements, challenges, and possibilities of mobile cognition, i.e., how to track activities beyond the physical realm, make sense of that data, and feed it back to the user in meaningful ways to augment human cognition.
Contemporary psychology theory emphasizes that people are more likely to carry out planned behavior if they are reminded of previous good experiences of that behavior. In this position paper we describe the results of a pilot study exploring an experimental in-the-wild validation of these findings in the context of running exercise. Building on today's technological improvements in data collection (advanced mobile and wearable sensors) and data visualization for capturing and replaying running experiences, we created an experimental prototype that takes pictures during one's run and later plays back slide shows of the experience based on the music one was listening to at the time. In an initial pilot study with 10 runners, we equipped 5 runners (the experimental group) with our prototype for 10 days and afterwards interviewed them on how the system influenced their exercise behavior.
There is an urgent need to better understand how mobile ICT products and services can be designed to meet the needs of persons with cognitive disabilities (including older users), and to develop and update standards so that they recommend solutions that benefit this group of users and exploit the true potential of mobile information and communication technologies (ICT). An ETSI (European Telecommunication Standards Institute) expert team is currently developing such guidelines in collaboration with other standardization bodies (including ISO and W3C) over an 18-month period; the work started in March 2015. At the Mobile HCI 2015 Workshop #3 on Mobile Cognition, we intend to present, share, and discuss our topic, approach, classification, insights, and early draft design recommendations, which extend over all five workshop topics. Additionally, we plan to raise issues and topics of common interest with expert colleagues working in the field and to invite those interested to collaborate with us during the later phases of the work, to exploit the true potential of mobile ICT in supporting people with cognitive disabilities. Last but not least, we would like to invite leading researchers to guide and assist our work, possibly through direct participation in a reference group.
In this paper we explore how memories can be tied to the context in which they take form and how the process of remembering can be triggered by spatial cues. Through two usage scenarios, we propose the concept of a mobile application that enhances the reminiscence of past episodes by mapping them onto the places in which they happened. The final aim is to provide a "qualitative" representation of the user's spaces, where physical properties such as sizes and distances are merged with the user's personal experiences, emotions, values, and priorities.
Modern cognitive psychology theories such as Dual Process Theory suggest that the source of much habitual behaviour is the nonconscious. Despite this, most behaviour change interventions using technology (BCITs) focus on conscious strategies to change people's behaviour. We propose an alternative avenue of research, which focuses on understanding how best to directly target the nonconscious via mobile devices in real-life situations to achieve behaviour change.
Lifelogging emerged as a way to capture and remember more, mainly through pictures. However, as lifelogging becomes increasingly mainstream, the volume of captured content increases while our capacity for reviewing it diminishes. In order to limit picture taking on such devices to the most memorable moments, we propose a biophysically driven capture process that adapts the camera capture rate to the wearer's heart rate. In our prototype, called PulseCam, an Android smartphone worn on the body acts as the capture device, adjusting its capture rate based on the heart rate measured by an Android-based smartwatch. The purpose of this work is twofold: a) to examine the potential of PulseCam to capture pictures of significant moments, and b) to investigate the potential of such pictures to improve one's ability to remember. This paper introduces the general approach, describes our current prototype, and outlines the planned study design.
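A minimal sketch of one possible heart-rate-to-capture-rate mapping is given below; the abstract does not specify PulseCam's actual rule, so the linear interpolation and the thresholds are assumptions.

```python
# Hypothetical mapping from heart rate to capture interval: the further the
# heart rate rises above the wearer's resting rate, the more often we shoot.
def capture_interval_s(hr_bpm, resting_hr=60.0,
                       max_interval=120.0, min_interval=5.0):
    """Return seconds between photos for the given heart rate (bpm)."""
    arousal = max(0.0, min(1.0, (hr_bpm - resting_hr) / resting_hr))
    return max_interval - arousal * (max_interval - min_interval)

for hr in (60, 75, 90, 120):
    print(hr, "bpm ->", round(capture_interval_s(hr), 1), "s between pictures")
```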
People engage in mobile decision-making on a daily basis. Spatially aware mobile devices have the potential to support users in spatio-temporal decision situations by augmenting their cognitive abilities or compensating for their deficiencies. In many cases though, this technology has a negative impact on people's spatial learning of the environment, such as during wayfinding. In this position paper we argue that mobile cognition must strive for solutions that find the right balance between immediate goals and longer-term objectives such as spatial learning.
What is "mobile cognition?" This is a term at the early stages of use. This paper explores the opportunities for human-systems interaction technology research and design in framing "mobile" as a modifier for a type of cognition -- cognition on the go. It offers a rationale for foregrounding this brain/body connexion.
In recent years there has been a growing interest in augmenting human cognition (attention, engagement, memory, learning, etc.) through ubiquitous technologies. With the ongoing research and development of near-constant capture devices, unlimited storage, and algorithms for retrieval, the resulting personal data has opened the door to a vast range of applications. In the third rendition of this workshop series, we focus on technologies and applications for capturing and integrating personal memory into everyday use cases. With the question "What constitutes a modern lifelog?", we invite researchers, designers, and practitioners to envision and exchange ideas on how ubiquitous technologies and applications can enhance people's memory in everyday life. In this one-day workshop, we will formulate application scenarios for making use of ubiquitous technologies in order to push personal data to an application layer where it is used to support and augment the human mind.
Photos are a rich and popular way of preserving memories. Thus, they are widely used as cues to augment human memory. Near-continuous capture and sharing of photos have created a need to summarize and review relevant photos to revive important events. However, there is limited work exploring how regular reviewing of selected photos influences the overall recall of past events. In this paper, we present an experiment to investigate the effect of regularly reviewing egocentric lifelogging photos on the formation and retrieval of autobiographical memories. Our approach protects the privacy of the participants and provides improved validation of their memory performance compared to existing approaches. The results of our experiment are a step towards developing memory-shaping algorithms that accentuate or attenuate memories on demand.
Recent technological improvements allow us to capture an increasing share of our everyday experiences, e.g. holidays, shopping routines, or sports activities, and store them in a digital format. An interesting avenue to explore in this context is how reviewing such captured content can improve one's memories of the original events. In this position paper, we describe a planned experiment to investigate the impact of such captured recordings (and their subsequent review) on supporting work meetings. We provide the planned study procedure, explain the envisioned apparatus and metric, and describe the technology used to support the review activity.
In this paper, we report an eye-tracking experiment employing an inexpensive, lightweight, and portable eye tracker paired with a tablet. Students were instructed to solve physics problems presented through three coherent representations of a phenomenon: vectorial representations, data tables, and diagrams. The effectiveness of each representation was assessed for three levels of student expertise (experts, intermediates, and novices) using entropy-based transition analysis of the gaze data. The results show that students of different skill levels (a) prefer different representations for problem solving, (b) switch between representations with different frequencies, and (c) can be distinguished by the density of representation use. The obtained results quantitatively confirm earlier findings of physics education research that were initially obtained through student interviews and observational studies.
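For readers unfamiliar with the technique, the sketch below computes the transition entropy of a labeled fixation stream; the labeling of fixations by representation is assumed to be given, and the toy sequences are purely illustrative.

```python
# Entropy-based transition analysis over a gaze stream whose fixations are
# labeled with the representation they fall on (V = vectors, T = table,
# D = diagram). Preprocessing details in the actual study may differ.
import math
from collections import Counter

def transition_entropy(fixation_labels):
    """Shannon entropy (bits) of the transitions between distinct
    representations; higher values indicate richer switching behavior."""
    transitions = [(a, b) for a, b in zip(fixation_labels, fixation_labels[1:])
                   if a != b]
    counts = Counter(transitions)
    total = sum(counts.values())
    if total == 0:
        return 0.0   # the gaze never left a single representation
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

novice = list("VVVTTVVVTT")   # alternates between two representations
expert = list("VTDVDTVDTD")   # switches among all three
print(transition_entropy(novice), transition_entropy(expert))
```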
Our ability to focus and concentrate fluctuates strongly across the day: at times we are able to work highly focused, and at other times we have trouble processing information effectively. The circadian rhythm describes these systematic changes in our daily concentration levels. The idea behind cognition-aware systems is to support users in situ according to their current cognitive abilities. Such systems are capable of identifying productive phases during the day and providing task suggestions accordingly. In this position paper we present a framework for developing algorithms to derive cognitive states. By detecting and predicting users' current capacity to take in and process information, such algorithms can help boost productivity, which can result in getting tasks done quicker, communicating more effectively, and processing information more efficiently.
In this paper, we propose a method for extracting attention on a physics textbook using eye-tracking glasses. We prepare a document including text and tasks in physics, and record the reading behavior of sixth-grade students on the document. The results confirm that students pay attention to different regions depending on the situation (using the text as material to learn the content, or using it as hints to solve tasks) and on their comprehension.
With an increasing number of available sensors, lifelogging produces more and more data. Thus, realizing the necessary condensation and forgetting processes becomes a challenge. Over the last three years we have investigated Managed Forgetting, Synergetic Preservation, and Contextualized Remembering in the ForgetIT project. Using the Semantic Desktop as an ecosystem, we have already applied these approaches successfully to personal information management. With these experiences at hand, we think that lifelogging could also benefit from these solutions. On the other hand, achievements and findings of the lifelogging community can help us realize one of our newest visions. In this paper we provide more details of our data condensation, preservation, and managed forgetting solutions and show how lifelogging could benefit from them. Additionally, we sketch our newest application scenario, a context-focused work environment that will make use of lifelogging technologies.
We propose a novel wearable system that enables users to create their own object recognition system with minimal effort and utilize it to augment their memory. A client running on Google Glass collects images of objects a user is interested in and sends them to the server with a request for a machine learning task: training or classification. The server processes the request and returns the result to Google Glass. During training, the server not only builds machine learning models from user-generated image data, but also updates the models whenever new data is added by the user. Preliminary experimental results show that our system, DeepEye, is able to train the custom machine learning models efficiently and to classify an image into one of 10 user-defined categories with 97% accuracy. We also describe challenges and opportunities for the proposed system as an external memory aid for end users.
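The abstract only outlines the client-server split, so the following Flask sketch is a hypothetical reconstruction of the train/classify request flow; the endpoint names, payload fields, and placeholder model store are all assumptions, not DeepEye's actual implementation.

```python
# Hypothetical server-side sketch of the train/classify request flow.
from flask import Flask, request, jsonify

app = Flask(__name__)
MODEL = {"classes": [], "trained": False}   # stand-in for a real model store

@app.route("/train", methods=["POST"])
def train():
    label = request.form["label"]           # user-defined category name
    _image = request.files["image"].read()  # image bytes from the Glass client
    if label not in MODEL["classes"]:
        MODEL["classes"].append(label)
    # ... add _image to the training set and (re)train incrementally ...
    MODEL["trained"] = True
    return jsonify(status="ok", classes=MODEL["classes"])

@app.route("/classify", methods=["POST"])
def classify():
    _image = request.files["image"].read()
    if not MODEL["trained"]:
        return jsonify(error="no model yet"), 400
    # ... run the classifier; here we only echo a placeholder result ...
    return jsonify(label=MODEL["classes"][0], confidence=0.97)

if __name__ == "__main__":
    app.run()
```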
Work in an emergency department (ED) is challenging for clinicians and nurses. The fast pace and the large amounts of data captured make the ED an interesting application area for cognitive support. Data from electronic health records can be complemented with sensor data to capture the rich interactions between providers and patients. This data can be used to trigger and augment providers' reflection-in-action and ultimately lead to better decisions. By capturing each individual's personal experience and relating it to planned or ongoing changes in care practices, every provider can participate in the continuous improvement of care.
A recurring science fiction theme is the downloading of abilities from another human into one's own mind. Emerging technologies beyond simple audio/video recordings, such as 360° videos, tactile recorders, and odor recorders, are promising tools to enable skill transfer and empathy. However, the large datasets they produce require new means for selecting, displaying, and sharing experiences. This workshop will bring together researchers from a wide range of computing disciplines, such as virtual reality, mobile computing, privacy and security, social computing and ethnography, usability, and systems research. Furthermore, we will invite researchers from related disciplines such as psychology and economics. The objective is to discuss how these trends are changing our existing research on sharing experiences and knowledge to augment the human mind.
Our visual perception is limited to the abilities of our eyes, which only perceive visible light. This limitation influences how we perceive and react to our surroundings and can endanger us in certain scenarios, e.g., firefighting. In this paper, we explore the potential of augmenting firefighters' visual sensing with depth and thermal imaging to increase their awareness of the environment. We built and evaluated two form factors: a hand-held and a head-mounted display. To evaluate the prototypes, we conducted two user studies in a simulated fire environment with real firefighters. In this workshop paper, we present our findings from the evaluation of the concept and prototypes.
In this position paper we extend the discussion of augmenting the human mind to include reasoning. Memory has so far been favored, but both play a major role in the mind, and we want to redress the balance. These two components of our cognitive system are in fact closely intertwined, with sometimes similar properties and goals. Both fail in many ways, which is why we are interested in them: even a slight augmentation of these abilities would have a huge impact. We therefore make a first attempt to propose a system capable of augmenting its user's reasoning: first by sensing the user's context and detecting cognitive biases across many domains through wearable devices and natural language processing, and then by applying a radical debiasing method that could be used in different scenarios thanks to augmented reality. These propositions are the beginning of a much-needed larger effort, which faces the many challenges discussed at the end.
Electrical Muscle Stimulation (EMS) has recently received increased attention from the HCI community. It has been used to remotely control users for navigation and instrument playing, but also, for example, as a method to convey haptic feedback in VR. As EMS devices become commercially available and application research continues, we explore EMS as a modality to convey information through actuation and as a means to induce and communicate emotions and moods. In this position paper, we present the results of two focus groups on using EMS for interpersonal communication as a way to send and receive emoticons through electrical stimulation. We argue that so-called "EMS Icons" have the potential to become part of multimedia experiences and, more broadly, of user interfaces as a haptic variant in analogy to visual and auditory icons.
Reading in real life occurs in a variety of settings. One may read while commuting to work, waiting in a queue, or lying on the sofa relaxing. However, most current activity recognition work focuses on reading in fully controlled experiments. This paper proposes reading detection algorithms that consider such natural reading. The key idea is to record a large amount of data including natural reading habits in real life (more than 980 hours from 7 participants) with commercial electrooculography (EOG) glasses and to use this data for deep learning. Our proposed approaches classify controlled reading vs. not reading with 92.2% accuracy using user-dependent training. However, the classification accuracy decreases to 73.8% for natural reading vs. not reading. The results indicate that there is a strong gap between controlled and natural reading, highlighting the need for more robust reading detection algorithms.
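A toy version of such a deep reading detector is sketched below as a small 1-D convolutional network over fixed-length EOG windows; the architecture, window length, and channel count are assumptions, since the paper's actual network is not described in the abstract.

```python
# Hypothetical binary reading detector over EOG windows (requires TensorFlow).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 250   # samples per window; 2 channels (horizontal/vertical EOG) assumed
model = keras.Sequential([
    layers.Input(shape=(WINDOW, 2)),
    layers.Conv1D(16, 9, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 9, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),  # reading vs. not reading
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data just to demonstrate the shapes; real training would
# use labeled EOG windows recorded by the glasses.
X = np.random.randn(64, WINDOW, 2).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
model.fit(X, y, epochs=1, verbose=0)
```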
Users should be able to receive information on a head-mounted display (HMD) anytime, anywhere: they can watch content shown on an HMD hands-free, even while moving or working. In this paper, we investigate how presenting animation on an HMD affects listening to speech information. It is difficult to listen to information presented in noisy surroundings, and simply turning up the volume is inconvenient and uncomfortable. Therefore, by additionally presenting animation, we aim to make it easier for users to listen to speech information. Our method uses lip-sync animation that matches the speech being presented. We performed two experiments to determine whether it is easier to take in speech information with animation.
This paper presents two topics. The first is an overview of our recently started project called "experiential supplement", which aims to transfer human experiences by recording and processing them so that they become acceptable to others. The second is sensing technologies for producing experiential supplements in the context of learning. Because a basic activity of learning is reading, we also deal with sensing of reading. Methods for quantifying reading in terms of the number of words read, the period of reading, and the type of document read, as well as for identifying read words, are shown together with experimental results. As for learning, we propose methods for estimating English ability, confidence in answers to English questions, and unknown words. These are sensed by various sensors, including eye trackers, EOG, EEG, and first-person vision.
Humans and machines work more closely together than ever before. Whether it is sensors that expand the human sensorium, exoskeletons that augment physical capabilities, augmented and digital worlds that break physical boundaries, or curated digital memories: the value of all these technologies rises and falls with their ability to synchronize with the user's current situation, understand the user's needs, and provide appropriate support. In this position paper we outline how semantic technologies can be applied to add more context and meaning to the user's role and task, and how Augmented Reality can present this information to the user. Instead of proposing yet another framework representing world knowledge, we describe how to build upon existing standards, descriptions of procedures and routines, and regulations that become machine-accessible. This way, machines and humans should be able to work in symbiosis. In the following we describe our motivation, list upcoming challenges, and provide a first direction for how to proceed.
This paper proposes a tool for tracking the thought process in web searching, because a user's browsing history potentially contains traces of the thought process behind collecting information efficiently. In fact, we rarely have the opportunity to look back at our own way of browsing, and even if we inspect the enormous browsing logs, it is hard to convert them into sharable knowledge. Therefore, our tool measures the dwell time, the amount of scrolling, the page contents, and the transitions between pages in order to evaluate the importance of each page. In addition, the tool provides an editing function to summarize a thought process based on the importance of each page.
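One plausible form of the importance score is sketched below as a weighted combination of the measured signals; the weights and the linear form are assumptions, since the abstract does not give the actual formula, and the log entries are hypothetical.

```python
# Hypothetical per-page importance score from the signals named above.
def page_importance(dwell_s, scroll_px, n_revisits,
                    w_dwell=1.0, w_scroll=0.01, w_revisit=5.0):
    """Longer dwell, more scrolling, and repeated returns to a page are
    treated as evidence that it mattered to the searcher's thought process."""
    return w_dwell * dwell_s + w_scroll * scroll_px + w_revisit * n_revisits

log = {  # hypothetical browsing log: url -> (dwell seconds, scroll pixels, revisits)
    "https://example.org/survey":  (180, 5200, 3),
    "https://example.org/landing": (8, 300, 0),
}
ranked = sorted(log, key=lambda u: page_importance(*log[u]), reverse=True)
print(ranked)   # pages to keep when summarizing the thought process
```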
Evolution has always been the main driving force of change for both the human body and brain. Presently, in the Information era, our cognitive capacities cannot simply rely on natural evolution to keep up with the immense advancements in the field of Ubiquitous technologies, which remain largely uninformed about our cognitive states. As a result, a so-called "cognitive gap" is forming between the human (users) and the machine (systems) preventing us from fully harnessing the benefits of modern technologies. We argue that a "cognitive information layer", placed in-between human and machine, could bridge that gap, informing the machine side about aspects of our cognition in real time (e.g., attention levels). In this position paper, we present our vision for such a software architecture, we describe how it could serve as a framework for designing and developing cognition-aware applications, and we showcase some application scenarios as a roadmap towards human-machine convergence and symbiosis.
Although regular meditation practice is linked to numerous mental health and cognitive benefits, it is often difficult for beginners to maintain focus during practice and to persevere with the activity over time. To tackle this issue, we externalised the ability to self-report loss of focus by developing a feedback loop that helps the user track and maintain their concentration levels in a non-invasive manner. We hypothesise that a change in breathing pattern can indicate a loss of concentration, and that audibly amplifying a person's breathing sounds during meditation can help them regain and maintain focus on their breathing, leading to a more effective meditation session. We present experimental designs and findings towards this end.
The tracking of cognitive and physical activity using a wearable device is an emerging research field. While several studies have been performed on large-scale activity tracking using watch-type wearable devices, large-scale activity tracking using eyewear-type wearable devices remains a challenging area, owing to the negative effect of such devices on a user's appearance. In this paper, we describe the initial results of a large-scale longitudinal study of users' concentration levels using an eyewear-type wearable device. Our approach uses the eyewear device and a pre-developed mobile application to collect data about eye blinks and head posture. The concentration level of users is estimated based on blink rate, blink strength, and head posture. We collected over 40,000 hours of data, and the results show how concentration changes over the week and across the day.
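As a rough illustration, the following sketch combines the three named features into a concentration score; the weights, thresholds, and linear form are assumptions rather than the estimator actually deployed in the study.

```python
# Illustrative concentration scoring from blink rate, blink strength, and
# head posture (all weights and cut-offs below are assumptions).
def concentration_score(blink_rate_per_min, blink_strength, head_pitch_deg):
    """Lower blink rate, weaker blinks, and a stable, slightly-down head
    pitch are taken as signs of focused work; returns a score in [0, 1]."""
    rate_term = max(0.0, 1.0 - blink_rate_per_min / 30.0)  # ~30/min = distracted
    strength_term = max(0.0, 1.0 - blink_strength)         # normalized to 0..1
    posture_term = 1.0 if -30.0 <= head_pitch_deg <= 0.0 else 0.5
    return 0.4 * rate_term + 0.3 * strength_term + 0.3 * posture_term

print(concentration_score(blink_rate_per_min=8, blink_strength=0.2,
                          head_pitch_deg=-15))   # focused reading posture
```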
Transferring human experience is one of the most fundamental and vital activities in our history. Teachers, trainers, textbooks, instructional movies, etc. have been introduced for this transmission. New methods that enhance the transmission of human experience are always in demand, depending on social and economic circumstances. Recent developments in virtual reality (VR), augmented reality (AR), and augmented human (AH) technologies lead us to expect a future in which we can instantaneously (or at least more efficiently) transfer skills or knowledge from one person to another, as in sci-fi movies. However, to achieve this, we need to optimize the transmission method for each person. We propose an augmented-human-based approach to experience transmission and review some factors that are essential but not well addressed in past studies.