Invented by Thong T. Nguyen, Paul D. Schmirler, and Timothy T. Duffy; assigned to Rockwell Automation Technologies, Inc.

The market for Augmented Reality (AR) interaction techniques has been rapidly growing in recent years. AR is a technology that overlays digital information onto the real world, enhancing the user’s perception of and interaction with their environment. With the increasing popularity of AR applications and devices, there is a growing need for innovative and intuitive interaction techniques to make the most of this technology.

One of the key factors driving the market for AR interaction techniques is the widespread adoption of AR in various industries. From gaming and entertainment to healthcare and education, AR is being used in a wide range of applications. As a result, there is a need for different interaction techniques that cater to the specific requirements of each industry. For example, in gaming, users may require gesture-based interactions to control characters or objects, while in healthcare, voice commands or touch-based interactions may be more suitable for medical professionals.

Another factor contributing to the growth of this market is the advancements in AR hardware and software. With the introduction of devices like smartphones, tablets, and smart glasses, AR has become more accessible to the general public. These devices come equipped with sensors, cameras, and processors that enable more sophisticated interaction techniques. Additionally, the development of AR software platforms and frameworks has made it easier for developers to create AR applications with unique interaction features.

The market for AR interaction techniques is also driven by the increasing demand for immersive and engaging user experiences. AR allows users to interact with digital content in a more natural and intuitive way, blurring the lines between the physical and digital worlds. This has opened up new possibilities for businesses to create interactive marketing campaigns, virtual showrooms, and training simulations. As a result, there is a growing need for interaction techniques that can provide users with a seamless and immersive experience.

In terms of the competitive landscape, there are several players in the market for AR interaction techniques. Major technology companies like Apple, Google, and Microsoft have invested heavily in AR and are continuously developing new interaction techniques for their respective platforms. Additionally, numerous startups and research organizations are working on innovative AR interaction techniques, such as hand tracking, eye tracking, haptic feedback, and brain-computer interfaces.

However, there are also challenges that need to be addressed in this market. One of the main challenges is the need for standardization and interoperability of AR interaction techniques. As different devices and platforms emerge, there is a risk of fragmentation, where each platform has its own set of interaction techniques. This can create a barrier for developers and limit the adoption of AR applications. Therefore, there is a need for industry-wide collaboration to establish common standards and guidelines for AR interaction techniques.

In conclusion, the market for AR interaction techniques is experiencing significant growth due to the increasing adoption of AR in various industries, advancements in AR hardware and software, and the demand for immersive user experiences. As the market continues to evolve, there will be a need for more innovative and intuitive interaction techniques to enhance the user’s interaction with AR applications and devices. Standardization and interoperability will also play a crucial role in ensuring the widespread adoption of AR interaction techniques.

The Rockwell Automation Technologies, Inc. invention works as follows:

A method can include receiving image data associated with a user’s surroundings and using a processor to generate a visualization that includes a virtual industrial automation device. The virtual industrial automation device can be depicted as a virtual object within the image data, and the virtual object can correspond to a physical industrial automation device. The method can include displaying the visualization on an electronic display via the processor and detecting a gesture within additional image data that includes both the user’s surroundings and the visualization. The gesture can be indicative of an instruction to move the virtual industrial automation device. The method can also include tracking the user’s movement via the processor, generating a second visualization that includes an animation of the virtual industrial automation device moving in accordance with the user’s movement, and displaying the second visualization on the electronic display via the processor.
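
The claimed flow, receiving image data, rendering the overlay, detecting a gesture, and tracking the resulting move, can be pictured as a simple loop. The Python sketch below is illustrative only; VirtualDevice, detect_move_gesture, render, and the camera/display objects are hypothetical stand-ins, not APIs from the patent or any real AR toolkit.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class VirtualDevice:
    """Virtual object standing in for a physical industrial automation device."""
    name: str
    position: tuple[float, float]  # where the overlay is drawn in the frame

def detect_move_gesture(frame) -> tuple[float, float] | None:
    """Hypothetical hand-tracking step: return a target position if the
    image data contains a 'move' gesture, otherwise None."""
    return None  # placeholder; a real system would run gesture recognition here

def render(frame, device: VirtualDevice):
    """Hypothetical compositor: draw the virtual device onto the camera frame."""
    return frame  # placeholder

def ar_loop(camera, display, device: VirtualDevice) -> None:
    # 1. Receive image data of the user's surroundings and display the overlay.
    frame = camera.capture()
    display.show(render(frame, device))

    # 2. Detect a gesture in image data that now includes the visualization.
    target = detect_move_gesture(camera.capture())

    # 3. If the gesture signals a move, track it and display the updated overlay.
    if target is not None:
        device.position = target
        display.show(render(camera.capture(), device))
```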

Background for Augmented Reality Interaction Techniques

This disclosure relates generally to the design of industrial systems. More specifically, embodiments of this disclosure relate to systems and methods for detecting user input within an augmented reality environment and displaying and/or modifying visualizations related to an industrial automation system or device based on that input.

Augmented Reality (AR) devices present layers of computer-generated content to the user via a display, so AR environments can provide users with both real-world and computer-generated content. AR devices can include head-mounted devices, smart glasses, virtual retinal displays, contact lenses, computers, or hand-held devices such as mobile phones and tablets. As AR devices become more widespread, operators in industrial automation environments may use them to help perform certain tasks. It is therefore recognized that improved systems and methods for performing certain tasks within the AR environment could help operators perform their job functions more efficiently.

This section is intended to introduce the reader to various aspects of art that may be related to the aspects of the present techniques described or claimed below. This discussion should help provide the reader with background to better understand the various aspects of this disclosure. Accordingly, these statements should be read in this light and not as admissions of prior art.

BRIEF DESCRIPTION

Below is a summary of certain embodiments described herein. These aspects are presented merely to give the reader a brief summary of those embodiments and are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that are not set forth below.

In one embodiment, a system for interacting with virtual objects within an augmented reality environment can include a head-mounted device. The head-mounted device can receive a first set of image data associated with a user’s surroundings and generate a first visualization that includes a plurality of virtual compartments. Each virtual compartment can be associated with a particular type of virtual industrial automation device, and each compartment can include multiple virtual industrial automation devices. Each virtual industrial automation device can be depicted as a virtual object within the first set of image data, and each virtual object can correspond to a physical industrial automation device. The head-mounted device can display the first visualization via an electronic display and detect a gesture within a second set of image data that includes the user’s surroundings as well as the first visualization. The gesture can be indicative of a selection of one of the virtual compartments. The head-mounted device can then generate a second visualization comprising the plurality of virtual industrial automation devices associated with the selected compartment and display the second visualization on the electronic display.
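
One way to picture the compartment structure is as a catalog keyed by device type, where a selection gesture swaps the compartment view for the devices it contains. The sketch below is a minimal Python model; the compartment and device names are invented for illustration.

```python
# Hypothetical catalog: each virtual compartment groups virtual devices of one type.
COMPARTMENTS: dict[str, list[str]] = {
    "motor_drives": ["drive_a", "drive_b"],
    "controllers": ["plc_x", "plc_y"],
    "hmi_panels": ["panel_1", "panel_2"],
}

def first_visualization() -> list[str]:
    """First visualization: one selectable tile per virtual compartment."""
    return list(COMPARTMENTS)

def select_compartment(gesture_target: str) -> list[str]:
    """Second visualization: the virtual devices in the compartment the
    user's gesture selected."""
    return COMPARTMENTS[gesture_target]

# Example: a gesture selecting "controllers" yields that compartment's devices.
assert select_compartment("controllers") == ["plc_x", "plc_y"]
```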

In another embodiment, a method can include receiving a first set of image data associated with a user’s surroundings and generating, via a processor, a first visualization that includes a virtual industrial automation device. The virtual industrial automation device can be depicted as a virtual object within the first set of image data, and the virtual object can correspond to a physical industrial automation device. The method can include displaying, via the processor, the first visualization on an electronic display and detecting a gesture within a second set of image data that includes the user’s surroundings as well as the first visualization. The gesture can be indicative of an instruction to move the virtual industrial automation device. The method can also include tracking the user’s movement via the processor, generating a second visualization that includes an animation of the virtual industrial automation device moving in accordance with the user’s movement, and displaying, via the processor, the second visualization on the electronic display.
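
The movement-tracking animation could amount to nudging the virtual object's overlay position toward the tracked hand position on each frame. The sketch below assumes 2-D screen coordinates and a fixed smoothing factor; both are illustrative choices, not details from the patent.

```python
def animate_toward(device_pos: tuple[float, float],
                   hand_pos: tuple[float, float],
                   smoothing: float = 0.25) -> tuple[float, float]:
    """Move the virtual device a fraction of the way toward the tracked
    hand position, producing a smooth animation over successive frames."""
    dx = hand_pos[0] - device_pos[0]
    dy = hand_pos[1] - device_pos[1]
    return (device_pos[0] + smoothing * dx, device_pos[1] + smoothing * dy)

# Example: over repeated frames the device converges on the hand position.
pos = (0.0, 0.0)
for _ in range(20):
    pos = animate_toward(pos, (100.0, 40.0))
print(pos)  # roughly (99.7, 39.9) after 20 frames
```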

In a third embodiment, a tangible, non-transitory computer-readable medium can include computer-executable instructions that, when executed, cause a processor to receive a first set of image data associated with a user’s surroundings and generate a first visualization that includes a first virtual industrial automation device and a second virtual industrial automation device. The first and second virtual industrial automation devices can be depicted as respective first and second virtual objects within the first set of image data, and the virtual objects can correspond to first and second physical industrial automation devices. The instructions can cause the processor to display the first visualization via an electronic display and detect a gesture within a second set of image data that includes the user’s surroundings and the first visualization. The gesture can be indicative of an instruction to move the first virtual industrial automation device toward the second virtual industrial automation device. The instructions can also cause the processor to determine a compatibility between the first virtual industrial automation device and the second virtual industrial automation device and, if the two devices are compatible, generate a second visualization that includes an animation of the first virtual industrial automation device coupling with the second virtual industrial automation device to form a joint virtual industrial automation device.
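
A toy model of the compatibility check and coupling step might assume each virtual device advertises the connector types it accepts, with two devices coupling only when they share one. The connector names and the JointDevice container below are invented for illustration.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class VirtualDevice:
    name: str
    connectors: set[str] = field(default_factory=set)

@dataclass
class JointDevice:
    parts: tuple[VirtualDevice, VirtualDevice]

def compatible(a: VirtualDevice, b: VirtualDevice) -> bool:
    """Devices are compatible if they share at least one connector type."""
    return bool(a.connectors & b.connectors)

def couple(a: VirtualDevice, b: VirtualDevice) -> JointDevice | None:
    """If compatible, play the coupling animation (elided here) and return
    the joint virtual device; otherwise leave the devices apart."""
    return JointDevice((a, b)) if compatible(a, b) else None

drive = VirtualDevice("drive_a", {"ethernet_ip"})
plc = VirtualDevice("plc_x", {"ethernet_ip", "serial"})
assert couple(drive, plc) is not None  # shared connector, so they couple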

DRAWINGS

The following detailed description may be better understood with reference to the accompanying drawings, in which like reference characters denote like parts.

The figures illustrate, according to embodiments, visualizations of virtual industrial automation devices displayed before and after a user performs first and second gaze gesture commands, before and after a grab gesture command, and before a push gesture command is performed.
