R2 - Remote Accessibility Assessment of the Built Environment for Individuals Who Use Wheeled Mobility Devices

Task Leader: JongBae Kim, Ph.D.

Co-Investigator:

Other participants:

Image 1

Project Overview

While home modification has come to be recognized as an important intervention strategy for improving the quality of life of people with disabilities, skilled professionals with experience in home modification for accessibility are in short supply. A Remote Accessibility Assessment System (RAAS) using virtual reality was developed by a University of Pittsburgh research team, enabling clinicians to assess the wheelchair accessibility of built environments from a remote location. The system succeeded in assessing accessibility without visits to the remote site, but it also had several limitations. To overcome them, we have developed several new technologies: a remote imaging system using an IP camera; an improved 3D reconstruction protocol; a web-based decision support system with online delivery of house models; and a simulated wheelchair maneuvering system via dynamic VRML control. We tested the feasibility of these technologies by applying them to two actual environments and confirmed that they can enhance the efficiency and accuracy of our remote accessibility assessment system. Recently, we began a randomized controlled study of the value of the improved RAAS, in which we will assess 27 homes with two methods (RAAS and on-site assessment) and compare the results.


Project Objectives

The objective of this study is to determine the effectiveness of our Remote Accessibility Assessment System (RAAS) in evaluating the built environments of wheeled mobility device users. Network-based protocols for remote accessibility assessment will be developed using state-of-the-art technologies such as virtual reality, VRML/X3D, XML, photogrammetry, videoconferencing, and remote imaging. We will evaluate the effectiveness of our new method by examining the agreement between assessment results obtained with the RAAS and with a conventional in-person (CIP) method.

The specific tasks or aims of the study include:

Develop and integrate the following RAAS components into the information infrastructure (D1): 1) Tele-Imaging System; 2) Web-based Multimedia Decision Support System; and 3) VR Wheelchair Maneuvering Simulation and 3D Dimension Measuring System.

Assess the level of agreement between the CIP method and the updated RAAS method in evaluating the accessibility of a wheelchair user's built environment.
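The agreement analysis in the second aim could, for example, be quantified with an inter-rater statistic such as Cohen's kappa over matched per-feature ratings. The sketch below is illustrative only; the sample ratings and the choice of statistic are assumptions, not the study's stated analysis plan.

```python
# Illustrative only: Cohen's kappa for agreement between two assessment
# methods rating the same accessibility features (hypothetical data).
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical per-feature ratings (1 = accessible, 0 = not) for one home.
raas = [1, 1, 0, 1, 0, 1, 1, 0]
cip  = [1, 1, 0, 0, 0, 1, 1, 1]
print(round(cohens_kappa(raas, cip), 3))  # → 0.467
```

A kappa near 1 would indicate that the RAAS and CIP methods reach essentially the same judgments; values near 0 would indicate agreement no better than chance.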


Expected Findings and Deliverables

Although the previous version of the RAAS showed its potential value through field trials, it had the following limitations: 1) it was hard for a novice photographer to take pictures suitable for 3D reconstruction; 2) there was no way for the evaluator and the consumer to communicate efficiently and sufficiently; 3) 3D models could be created only for subparts of the whole home environment; and 4) creating a 3D model took time because shared points had to be marked and referenced in each photo [8]. In this study, we improved the system by addressing these limitations through the following development work: a remote imaging system using an IP camera; an improved 3D reconstruction protocol; a web-based decision support system with online delivery of house models; and a simulated wheelchair maneuvering system via dynamic VRML control.

Tele-Imaging

In the previous version of the RAAS, we sent an undergraduate student to the client's home with a digital camera and guidelines on how to take good pictures of the home environment. However, it was not easy for a novice photographer, usually unfamiliar with 3D reconstruction concepts, to take 2D photos good enough to create 3D models from. We therefore developed a remote imaging system through which the 3D reconstruction technician can take pictures of the client's home environment from a distance.

Image 2

Figure 1. Web-based Tele-imaging System

In the updated protocol, an IP camera is sent with an undergraduate student to the client's home at the remote location, and our research center is connected to the home via a high-speed internet connection (cable modem or DSL). The IP camera then shows us the remote home environment, and a technician at the research center directs the student by phone to move, pan, and tilt the camera until the angle is appropriate, then takes a good 2D snapshot remotely. Figure 1 shows the client's side, where a part-time undergraduate student wears a cordless phone headset and holds the remote control of the pan-tilt module under the camera. It also shows the technician's side, where the technician views the client's home at a high frame rate through the video conferencing program provided by the camera vendor, alongside an FTP program through which the transferred pictures can be viewed in near real time.

Many webcams and surveillance IP cameras are on the market these days, but most provide only a low-resolution snapshot function because their makers are unwilling to sacrifice the video frame rate; users want video of at least one frame per second. We therefore adopted a high-resolution IP camera, the IQeye 301 (IQinvision, Inc.), which provides better camera coverage, better picture detail, and an overall superior IP video experience. Because current internet speeds do not allow high-resolution images to be transferred in real time (i.e., at least one frame per second), this camera was also designed to record high-resolution images on the local side, such as the camera's memory card or a local computer connected via USB or FireWire. The viewer software allows pictures to be taken manually, but this function opens a file-save dialog that requires a filename entry for every picture taken, and it saves photos only at the low resolution used for streaming. Another feature of this camera is a trigger function that causes it to take pictures when the camera sensor detects motion, periodically (every 10 seconds or every minute), or when it is triggered by an external device plugged into the input port. The trigger function can also be used to control another device remotely through the output port using a button on the software viewing screen. To upload our photos to our remote location in a high-resolution format, we connected a jumper between the output and input ports (Figure 2), which allowed us to remotely trigger the camera's own automatic function. This automatic function was configured in the camera's onboard settings to send a high-resolution image file with a time-stamped filename to an FTP server (Figure 3), which could be the technician's PC or a local FTP server.
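The camera-side upload step can be sketched in a few lines: each triggered snapshot gets a time-stamped name so that transfers to the FTP server never collide and sort chronologically. The naming scheme below is illustrative, not the IQeye firmware's actual format.

```python
# Sketch of time-stamped snapshot naming for the FTP upload workflow.
# The prefix and pattern are assumptions for illustration only.
from datetime import datetime

def snapshot_filename(when=None, prefix="iqeye", ext="jpg"):
    """Build a collision-free, chronologically sortable filename."""
    when = when or datetime.now()
    return f"{prefix}_{when:%Y%m%d_%H%M%S}.{ext}"

# Example: a snapshot triggered at 14:30:05 on 2007-05-01.
print(snapshot_filename(datetime(2007, 5, 1, 14, 30, 5)))
# → iqeye_20070501_143005.jpg
```

The resulting file would then be pushed to the server with a standard FTP client (e.g. Python's `ftplib.FTP.storbinary`), landing on the technician's PC or a local FTP server as described above.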

Image 3

Figure 2. Jumpers on camera ports

Image 4

Figure 3. Trigger Settings page showing FTP options

The IQeye camera can take high-resolution pictures with a wide field of view because it provides a range of focal lengths, from 4.2 to 10 mm. The time and effort needed to create the 3D models are reduced significantly because the technician who will build the models can take the appropriate pictures himself, as an expert, while watching the client's home environment over streaming video. In addition, he works from high-resolution, wide-angle 2D photos rather than lower-resolution, standard-angle ones.
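The relationship between focal length and field of view follows directly from the lens geometry: the horizontal angle of view is 2·atan(sensor width / 2f). The sensor width used below (6.4 mm, typical of a 1/2-inch sensor) is an assumption for illustration; the IQeye 301's actual sensor dimensions are not given here.

```python
# Illustrative field-of-view calculation for the camera's zoom range.
# sensor_width_mm is an assumed value, not the IQeye 301's specification.
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=6.4):
    """Horizontal angle of view from focal length and sensor width."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(round(horizontal_fov_deg(4.2), 1))   # wide end of the zoom range
print(round(horizontal_fov_deg(10.0), 1))  # telephoto end
```

With these assumptions, the 4.2 mm wide end covers roughly twice the horizontal angle of the 10 mm end, which is why fewer photos are needed to cover a small room.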

3D Reconstruction of the whole House Model

We used the commercial photogrammetry software PhotoModeler Pro (Eos Systems) as our 3D reconstruction tool. When we developed the previous version of the RAAS, we used PhotoModeler 4.0, which could create a 3D model of each subpart of the home environment, such as the bathroom, bedroom, hallway, and entrance, but offered no function to incorporate the subparts into an integrated 3D home model.

In this study, we created a 3D model of the client's entire home by using the "Open Merged Project" function which was added in Photomodeler Pro 5.0. By defining points within each room that are shared with the adjacent rooms, the software is able to merge the separate room models into one overall house model.
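The computation underlying such a merge can be sketched as a least-squares rigid alignment: from the points shared between two rooms, estimate the rotation and translation that map one room's coordinate frame onto its neighbour's. The version below assumes, for simplicity, that adjoining rooms share a horizontal floor plane (so only a rotation about the vertical axis is needed); it illustrates the principle and is not PhotoModeler's actual algorithm.

```python
# Illustrative 2D rigid alignment (Procrustes-style least squares):
# given matched shared points, recover the rotation angle and translation
# that map one room's floor-plan coordinates onto its neighbour's.
import math

def rigid_transform_2d(src, dst):
    """Rotation angle (radians) and translation mapping src onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Cross terms over centered coordinates give the optimal rotation.
    sxx = sum((s[0]-csx)*(d[0]-cdx) + (s[1]-csy)*(d[1]-cdy)
              for s, d in zip(src, dst))
    sxy = sum((s[0]-csx)*(d[1]-cdy) - (s[1]-csy)*(d[0]-cdx)
              for s, d in zip(src, dst))
    theta = math.atan2(sxy, sxx)
    # Translation carries the rotated source centroid onto the target one.
    tx = cdx - (csx*math.cos(theta) - csy*math.sin(theta))
    ty = cdy - (csx*math.sin(theta) + csy*math.cos(theta))
    return theta, (tx, ty)
```

With three or more non-collinear shared points per room pair, the transform is fully determined; chaining such transforms room by room stitches the subpart models into one house model, as PhotoModeler's merge function does internally in 3D.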

The use of high-resolution, wide-angle images enabled us to create much better 3D models more easily and provided a better view of the virtualized environment. The two models in Figure 4 can be compared: Figure 4(a) shows a whole-home 3D model of a client's house created with a consumer-level digital camera and standard lens, and Figure 4(b) shows a 3D model of the same house created with the IQeye IP camera. Because we could create the whole 3D home model while also taking advantage of the high-resolution, wide-angle IP camera, the evaluator could assess the accessibility of the target home more intuitively, and the efficiency of the new RAAS was expected to improve greatly.

Image 5

(a) A house model created using a consumer-level digital camera

Image 6

(b) A house model created using the IQeye IP camera

Figure 4. Two 3D models created by a digital camera and a high-end IP camera

Web-based Decision Supporting System via House Model Online Delivery

House model delivery is an important part of this research for data sharing among stakeholders. A web portal environment is used to store and share documents, and each house assessment project is hosted in an individual sub portal. The reconstructed 3D house models are posted on the sub portal and are accessible only to the house owner and the professionals involved in the project, so that the privacy of the home owner as well as the security of the house is protected. We used Microsoft Office SharePoint Portal Server 2003 as the web portal platform, on which the telerehabilitation portal infrastructure was built as a development task of the RERC on Telerehabilitation [25].

Each sub portal contains five pages (Figure 5): main, documents and lists, create, site settings, and help. The main page is used for general discussions, opinions, announcements, and shared software and installation guidelines. The documents and lists page shows all the document libraries, picture libraries, discussion boards, and surveys in the portal; home owners can upload pictures and measurements to the libraries and submit their responses to the surveys, and other project members can post their opinions and materials, including 3D models and 2D photos, to the discussion board and libraries. New libraries, lists, discussion boards, surveys, or web pages can be added to the portal through the create page. The site settings page is used to manage site settings and update personal information or information about the users. Finally, the help page gives access to help resources for portal services, such as managing lists and libraries, sending a file for review, or searching for a document.

Image 7

Figure 5. Main Page of a Sub Portal

Simulated Wheelchair Maneuvering System via Dynamic VRML Control

The VRML 3D simulation environment was created by combining a reconstructed house model with a dynamically controlled wheelchair and two sets of navigation controls. As shown in Figure 6, three blue arrows rotate and move the wheelchair in normal mode, and three red ones do so in a faster mode. The navigation controls move along with the wheelchair so that they remain accessible to users throughout the 3D simulation.

Image 8

Figure 6. The integrated 3D environment from reconstructed house model and wheelchair with dynamic controls for Virtual Accessibility Assessment.

Image 9

Figure 7. Illustration of the functions of the navigation controls: (a) input touch sensor associated with the navigation controls; (b) OrientationInterpolator and TimeSensor associated with the navigation controls; (c) declaration of each data type and an initial value in the Script node; (d) links between event generators and event receivers.

Figure 7 illustrates the functions of the navigation controls. Figure 7(a) shows the touch sensors attached to each of the navigation controls (Left, Right, Forward, LeftFast, RightFast, ForwardFast). Touch sensors provide interactivity and are activated immediately when the user clicks any of the controls. Figure 7(b) shows an OrientationInterpolator node associated with the turn-left touch sensor; the animation path is defined by the keys and the corresponding keyValues, and a TimeSensor serves as the clock that generates animation events over time, with each animation triggered by a mouse click. Figure 7(c) shows an example of the event interface and field declaration, with a data type, a name, and an initial value. The Script node, which references a JavaScript program through a url field, stores the current orientation of the wheelchair; the JavaScript program updates the orientation with each movement, rotating the wheelchair about the y axis. Figure 7(d) shows the ROUTE statements that create the links between event generators and event receivers. A time stamp is given to each event generator, and the same time stamp is given to all event receivers that need to update simultaneously; as a result, the current orientation of the wheelchair is propagated to all the navigation controls. When a receiver receives an eventIn, it responds to the event and generates an eventOut.
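The Script node's logic can be restated outside VRML as a small state machine: each control click updates the wheelchair's heading (rotation about the y axis) or advances its position along the current heading, and the new state is then available to every listener, mirroring how ROUTE fans one event out to all receivers sharing a time stamp. The step sizes and control names below are illustrative.

```python
# Plain-Python analogue of the VRML Script node driving the wheelchair.
# Step sizes (15 degrees per turn, 0.25 units per move, 3x in fast mode)
# are assumptions for illustration, not values from the actual system.
import math

class WheelchairSim:
    def __init__(self, turn_step=math.radians(15), move_step=0.25):
        self.heading = 0.0             # rotation about the y axis, radians
        self.x = self.z = 0.0          # position on the floor plane
        self.turn_step, self.move_step = turn_step, move_step

    def on_click(self, control):
        """Handle a touch-sensor event from one navigation control."""
        fast = 3 if control.endswith("Fast") else 1
        if control.startswith("Left"):
            self.heading += self.turn_step * fast
        elif control.startswith("Right"):
            self.heading -= self.turn_step * fast
        elif control.startswith("Forward"):
            # Advance along the current heading in the x-z plane.
            self.x += math.sin(self.heading) * self.move_step * fast
            self.z += math.cos(self.heading) * self.move_step * fast
        return self.heading, (self.x, self.z)
```

In the real system the equivalent update runs in the Script node's JavaScript, and the new orientation reaches both the wheelchair geometry and the navigation controls through ROUTE connections so that the controls follow the chair.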

The dimensions of the individual wheelchair that each client will use in the target environment can be applied to the virtual wheelchair for each project. Through this simulation system, wheelchair users and other project members can then try to drive the wheelchair through the target environment from anywhere. This can be especially helpful to new wheelchair users, such as those who have recently sustained a spinal cord injury: while still in the rehabilitation hospital, they can drive the wheelchair through a virtual model of their house and decide which parts of the home should be modified before their return. The simulation system provides an intuitive way to assess accessibility for novice as well as professional evaluators and is available everywhere via the internet.

Distance Measurement based on 3D Models

Besides interactive simulation within the web-based 3D environment, we also need to measure actual distances within the physical built environment for assessment purposes. Regular VRML plug-in software does not provide a measurement function, so a distance measurement capability was developed by VCollab in its commercial 3D visualization software, based on our needs. The software supports VRML visualization as well as 3D distance measurement on VRML models. As shown in Figure 8, a VRML model is read and displayed in VCollab and the width of a doorway is measured.

Image 10

Figure 8. Distance measurement based on 3D models
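The core of such a measurement is straightforward: once two points are picked on the model, the tool reports the straight-line (Euclidean) distance between them in model units. The sketch below illustrates this for a doorway width; the coordinates are hypothetical, and this is the general computation rather than VCollab's implementation.

```python
# Illustrative Euclidean distance between two picked 3D model points,
# e.g. opposite door jambs. Coordinates are hypothetical (metres).
import math

def distance_3d(p, q):
    """Straight-line distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

left_jamb  = (1.20, 0.00, 3.50)
right_jamb = (2.05, 0.00, 3.50)
print(round(distance_3d(left_jamb, right_jamb), 2))  # → 0.85
```

A measured clear width like this one can then be checked directly against accessibility criteria (for example, minimum clear door openings for wheelchair passage).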

Video examples of the system

QuickTime example of the system used to create the modeling environment

Second video showing the wheelchair moving within the modeling environment


Project Update

A tele-imaging system, a web-based multimedia decision support system, and a VR maneuvering simulation system were developed to improve the value of the RAAS. We are conducting a randomized controlled field test with 27 actual home environments in which wheelchair users reside. To date, thirty-two (32) participants have been screened and twenty-seven (27) have consented in writing by completing the Consent to Act as a Research Subject. Of these 27 subjects, twenty-two (22) have completed the hard-copy or online survey; 20 of those 22 have also completed the residential photography segment of the study; and at least 16 conventional in-person (CIP) evaluations of subjects' built environments have been completed.

We have started a new project with the University of Hawaii, Accessibility for All: A Pilot Study for Development of a Remote Accessibility Assessment System of the Built Environment for Individuals with Disabilities in the Pacific Islands, funded by the NEC Foundation of America, in which we will apply the system we developed for residents of the Hawaiian Islands. The system will also be applied in the workplace assessment project of the RERC on Workplace Accommodation, renewed by Georgia Tech.

 

