Recruitment

Recruitment Status
Active, not recruiting

Summary

Conditions
Retinitis Pigmentosa
Type
Interventional
Phase
Not Applicable
Design
Allocation: N/A
Intervention Model: Single Group Assignment
Intervention Model Description: This is a small-sample, open-label feasibility study of an object recognition and localization system based on machine learning.
Masking: None (Open Label)
Primary Purpose: Device Feasibility

Participation Requirements

Age
Between 18 years and 125 years
Gender
Both males and females

Description

The investigators propose to add an object-finding feature to a retinal prosthesis system. To use this feature, the participant will enable a special mode and select the desired object from a set of pre-programmed object types. Imagery from the visible light camera in the system eyeglasses will be processed using object recognition software as the participant scans their head across the room scene. When the object is identified in the scene by the processor, a flashing icon will be output to the epiretinal array in the appropriate position to guide the participant to the physical location of the object. Once located, the system will track the location of the object.

There will be two phases to the human subjects evaluation, each run initially through simulations in sighted human subjects, followed by tests in Argus II participants. In phase 1, system evaluation in human subjects at Johns Hopkins University (JHU) will explore performance in representative tasks and compare prosthetic visual performance without and with the new object-finding feature. An important aspect of the evaluation will be the comparison of different icons and presentation modes to assist participants in locating and reaching objects. In phase 2, the system will be integrated into the Argus II video processing unit (VPU), and JHU will conduct human trials that include functional testing of the integrated prototype in representative environments and optimizing the ergonomics of the system, e.g., simultaneous finding and tracking of multiple objects/icons.
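The camera-to-array pipeline described above (detect the target object in the camera frame, then render a flashing icon at the corresponding electrode position) can be sketched as follows. This is an illustrative sketch only, not the study's implementation: the function names, the 640x480 frame size, and the use of a 6x10 grid for icon placement (the Argus II array has 60 electrodes in a 6x10 layout, but this mapping is assumed) are all hypothetical.

```python
# Hypothetical sketch of the object-finding loop: map a detection's
# pixel position in the camera frame to a position on the electrode array.
# All names and dimensions are illustrative assumptions, not study details.

def map_to_array(cx, cy, frame_w, frame_h, array_w=10, array_h=6):
    """Map a detection center (camera pixels) to (row, col) on the array."""
    col = min(array_w - 1, int(cx * array_w / frame_w))
    row = min(array_h - 1, int(cy * array_h / frame_h))
    return row, col

def find_object(frames, detector, target, frame_w=640, frame_h=480):
    """Scan frames as the participant sweeps the camera; when the detector
    reports the target object, return the icon position on the array.
    Returns None if the target is never seen."""
    for frame in frames:
        for label, cx, cy in detector(frame):
            if label == target:
                return map_to_array(cx, cy, frame_w, frame_h)
    return None

# Toy demonstration with a stand-in detector: the second "frame"
# contains a cup centered in the camera's field of view.
frames = ["empty_scene", "scene_with_cup"]

def fake_detector(frame):
    return [("cup", 320, 240)] if frame == "scene_with_cup" else []

icon_position = find_object(frames, fake_detector, "cup")
# A centered detection maps to the middle of the 6x10 grid: (3, 5).
```

In the actual system the detector would run on live camera imagery and the icon would flash and update continuously as the object is tracked; this sketch only shows the position-mapping idea.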

Tracking Information

NCT #
NCT04319809
Collaborators
  • Johns Hopkins University
  • Second Sight Medical Products
Investigators
Principal Investigator: Gislin Dagnelie, PhD, Johns Hopkins University