Invented by Thong T. Nguyen, Paul D. Schmirler, and Timothy T. Duffy; assigned to Rockwell Automation Technologies, Inc.
The Rockwell Automation Technologies, Inc. invention works as follows:
A method can include receiving image data associated with a user's surroundings and using a processor to generate a visualization that includes a virtual industrial automation device. The virtual industrial automation device can be a virtual object displayed within the image data, and the virtual object can correspond to a physical industrial automation device. The method can include displaying the visualization on an electronic display via the processor and detecting a gesture within image data that includes both the user's surroundings and the visualization. The gesture can be indicative of an instruction to move the virtual industrial automation device. The method can also include tracking the user's movement via the processor, generating a visualization that includes an animation of the virtual industrial automation device moving in response to the user's movement, and displaying that visualization on the electronic display via the processor.

Background for "Augmented Reality Interaction Techniques"
The disclosure relates generally to the design of industrial systems. More specifically, embodiments of this disclosure relate to systems and methods for detecting input from users within an augmented-reality environment and displaying and/or modifying visualizations related to an industrial automation system or device based on that input.
Augmented reality (AR) devices present layers of computer-generated content to the user via a display, so an AR environment can provide users with both real-world and computer-generated content. Augmented reality devices can include head-mounted devices, smart glasses, virtual retinal displays, contact lenses, computers, or hand-held devices such as mobile phones or tablets. As AR devices become more widespread, operators may use them to help perform certain tasks in industrial automation environments. It is therefore recognized that improved systems and methods for performing certain tasks in an AR environment could help operators perform their job functions more efficiently.
This section is intended to introduce the reader to various aspects of art that may be related to the aspects of the present techniques described or claimed below. This discussion should help provide the reader with background to better understand the various aspects of this disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
BRIEF DESCRIPTION
Below is a summary of certain embodiments described herein. These aspects are presented merely to give the reader a brief summary of those embodiments and are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that are not set forth below.
In one embodiment, a system for interacting with virtual objects within an augmented reality environment may include a head-mounted device. The head-mounted device can receive a first set of image data associated with a user's surroundings and generate a first visualization of a plurality of virtual compartments. Each virtual compartment can be associated with a particular type of virtual industrial automation device, and each compartment can include multiple virtual industrial automation devices. Each virtual industrial automation device can depict a virtual object within the first set of image data, and the virtual object can correspond to a physical industrial automation device. The head-mounted device can display the first visualization via an electronic display and detect a gesture in a second set of image data that includes the user's surroundings as well as the first visualization. The gesture can be indicative of a selection of one of the virtual compartments. The head-mounted device can then generate a second visualization comprising the plurality of virtual industrial automation devices associated with the selected compartment and display this second visualization on the electronic display.
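The compartment-selection flow described above can be sketched as a simple state update: the first visualization lists compartments by device type, and a selection gesture swaps in the devices of the chosen compartment. The class and method names below are illustrative assumptions for the sketch, not APIs from the patent.

```python
from dataclasses import dataclass, field


@dataclass
class VirtualCompartment:
    """A compartment grouping virtual devices of one type (illustrative)."""
    device_type: str
    devices: list = field(default_factory=list)


class CompartmentMenu:
    """Minimal sketch of the first/second visualization flow."""

    def __init__(self, compartments):
        self.compartments = compartments
        # First visualization: the compartment menu itself.
        self.visualization = [c.device_type for c in compartments]

    def on_select_gesture(self, compartment_index):
        # A detected selection gesture replaces the compartment menu with
        # the virtual devices belonging to the chosen compartment.
        chosen = self.compartments[compartment_index]
        self.visualization = list(chosen.devices)  # second visualization
        return self.visualization


menu = CompartmentMenu([
    VirtualCompartment("motor drives", ["drive A", "drive B"]),
    VirtualCompartment("controllers", ["controller X"]),
])
second = menu.on_select_gesture(0)
```

In practice the selection gesture would come from a hand- or gaze-tracking pipeline on the head-mounted device; here it is reduced to an index for clarity.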
In another embodiment, a method can include receiving a first set of image data associated with a user's surroundings and generating, via a processor, a first visualization that includes a virtual industrial automation device. The virtual industrial automation device can depict a virtual object in the first set of image data, and the virtual object can correspond to a physical industrial automation device. The method can include displaying the first visualization on an electronic display via the processor and detecting a gesture within a second set of image data that includes the user's surroundings as well as the first visualization. The gesture can be indicative of an instruction to move the virtual industrial automation device. The method can include tracking the user's movement via the processor, generating a second visualization that includes an animation of the virtual industrial automation device moving in response to that movement, and displaying the second visualization on the electronic display via the processor.
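The move-gesture embodiment amounts to: detect the gesture, then re-anchor the virtual object to the tracked hand position frame by frame to produce the animation. The sketch below assumes upstream gesture detection and per-frame tracking data; the names are hypothetical, not from the patent.

```python
class VirtualDevice:
    """Illustrative virtual industrial automation device anchored in image space."""

    def __init__(self, position):
        self.position = position  # (x, y) in image coordinates


def track_and_move(device, hand_positions, move_gesture_detected):
    """Return the animation frames of the device following the user's hand.

    `hand_positions` stands in for per-frame tracking samples extracted from
    the second set of image data; gesture detection itself is assumed to
    happen upstream (e.g. by a hand-pose model on the device's camera feed).
    """
    if not move_gesture_detected:
        return [device.position]  # no move instruction: device stays put
    frames = []
    for pos in hand_positions:    # each tracked hand sample...
        device.position = pos     # ...re-anchors the virtual object
        frames.append(device.position)
    return frames                 # frames form the second visualization's animation


dev = VirtualDevice((0, 0))
frames = track_and_move(dev, [(1, 1), (2, 2), (3, 3)], move_gesture_detected=True)
```

Rendering each frame over the live camera image would yield the animated second visualization the claim describes.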
In a third embodiment, a computer-readable medium can include computer-executable instructions that, upon execution, cause a processor to receive a first set of image data associated with a user's surroundings and generate a first visualization that includes a first virtual industrial automation device and a second virtual industrial automation device. The first and second virtual industrial automation devices can depict respective first and second virtual objects in the first set of image data, and the virtual objects can correspond to first and second physical industrial automation devices. The instructions can also cause the processor to display the first visualization via an electronic display and detect a gesture in a second set of image data that includes the user's surroundings and the first visualization. The gesture can be indicative of moving the first virtual industrial automation device toward the second virtual industrial automation device. The instructions can further cause the processor to determine a compatibility between the first and second virtual industrial automation devices and, if the two devices are compatible, generate a second visualization that includes an animation of the first virtual industrial automation device coupling with the second to form a joint virtual industrial automation device.
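The coupling embodiment hinges on a compatibility check gating the animation. A minimal sketch, assuming a simple lookup table as the compatibility source (a real system might instead query device profiles or a connectivity model; all names here are illustrative):

```python
def couple_devices(first, second, compatibility_table):
    """Couple two virtual devices into a joint device if they are compatible.

    `compatibility_table` maps unordered device-type pairs to True/False.
    Returns the joint virtual device, or None when the pair is incompatible
    (in which case no coupling animation would be generated).
    """
    key = frozenset((first["type"], second["type"]))
    if not compatibility_table.get(key, False):
        return None
    # Compatible: the joint virtual device combines both originals, which
    # a renderer could then animate snapping together.
    return {"type": "joint", "parts": [first, second]}


table = {frozenset(("drive", "motor")): True}
joint = couple_devices({"type": "drive"}, {"type": "motor"}, table)
rejected = couple_devices({"type": "drive"}, {"type": "hmi"}, table)
```

Using an unordered pair as the key makes the check symmetric, matching the claim's gesture of moving either device toward the other.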
DRAWINGS
The following detailed description can be better understood when read with the accompanying drawings. In the drawings, like reference characters represent like parts.
FIG. 1 illustrates an industrial automation environment, in accordance with an embodiment;
FIG. 2 illustrates a visualization associated with the system of FIG. 1, in accordance with an embodiment;
FIG. 3 illustrates the visualization of FIG. 2 displayed before the user performs a first gaze gesture command, in accordance with an embodiment;
FIG. 4 illustrates the visualization of FIG. 2 after the first gaze gesture command is performed and while the user performs a second gaze gesture command, in accordance with an embodiment;
FIG. 5 illustrates the visualization of FIG. 2 after the user performs the second gaze gesture command, in accordance with an embodiment;
FIG. 6 illustrates a further visualization of FIG. 2, in accordance with an embodiment;
FIG. 7 illustrates the visualization of FIG. 2 displayed after a gaze gesture command has been performed, in accordance with an embodiment;
FIG. 8 illustrates the visualization of FIG. 2 before the user performs a grab gesture command, in accordance with an embodiment;
FIG. 9 illustrates the visualization of FIG. 2 displayed after a grab gesture command has been executed, in accordance with an embodiment;
FIG. 10 illustrates a further visualization of FIG. 2, in accordance with an embodiment;
FIG. 11 illustrates the visualization of FIG. 2 before the user performs a push gesture command, in accordance with an embodiment.