Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2016506639
Methods and apparatus for providing tactile control of sound are provided, described as implemented in a system that includes a sound transducer array together with a touch-surface-enabled display table. The array includes a group of transducers (multiple speakers and/or microphones) and is configured to perform individual processing of the signals for the group of transducers so that sound rendering (in configurations where the array contains many speakers) or sound pickup (in configurations where the array contains many microphones) is focused in one direction while interference from other directions is reduced, forming sound projection patterns. The user may adjust parameters associated with the visualized sound projection patterns by interacting with the touch surface while receiving visual feedback, executing one or more commands on the touch surface. The commands may be adjusted in accordance with the visual feedback received from changes in the display on the touch surface. [Selected figure] Figure 10
Method and apparatus for providing tactile control of sound
[0001]
This application claims priority under 35 U.S.C. §119(e) to the following U.S. provisional patent applications, each filed on Nov. 14, 2012: the application entitled "Device and System for Refreshing a Sound Field in a Physical Space"; No. 61/726,451, entitled "Method and Apparatus for Providing Tangible Control of Sound"; No. 61/726,441, entitled "Device and System Having Smart Directional Conferencing"; and No. 61/726,461, entitled "Collaborative Document Review and Editing"; and claims the benefit of each of them.
[0002]
[0002]The various features relate to methods, apparatus and systems for providing tactile
control of sound.
[0003]
[0003]
Current approaches to sound control combine audio with a standard interface.
For example, current approaches to controlling audio algorithms rely on conventional interfaces: keyboards, mice, buttons, bars, menus, or their counterparts in software are used to adjust the different parameters that drive the algorithm. More intuitive and simpler controls may be possible in the software user-interface space, but such controls are rarely located in the same physical space where the sound is actually occurring.
[0004]
[0004]
SUMMARY A simplified summary of one or more aspects of the disclosure is provided below in order to give a basic understanding of such aspects. This summary is not an extensive overview of all contemplated features of the present disclosure, and is intended neither to identify key or critical elements of all aspects of the present disclosure nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects of the present disclosure in a simplified form as a prelude to the more detailed description that is presented later.
[0005]
[0005]
A first example provides a method for controlling a sound field. The method includes displaying, in a physical space, a graphical representation of one or more sound projection patterns associated with a sound transducer array, where the sound transducer array is in communication with the physical space. The physical space may include, for example, a touch-sensing surface, a display screen, or a tablet display. Once the graphical representation of the one or more sound projection patterns is displayed, at least one command directed at the graphical representation is detected, and one or more sound projection patterns of the sound transducer array are modified based on the at least one command. The at least one command may be a gesture, including tapping multiple times, drawing one or more circles around one or more sound projection patterns, or grouping together multiple sound projection patterns of the one or more sound projection patterns so that they operate as a group.
[0006]
[0006]
Displaying the one or more sound projection patterns may include generating a graphical representation of the one or more sound projection patterns based on characteristics of the one or more sound projection patterns. The characteristics may include at least one of: beam width, direction, origin, frequency range, signal-to-noise ratio, and receiver or generator type of the one or more sound projection patterns. The graphical representation may be drawn to a scale proportional to the characteristics of the one or more sound projection patterns, where the scale has a one-to-one correspondence with the characteristics of the at least one sound projection pattern.
[0007]
[0007]According to one aspect, the graphical representation may include a color scheme corresponding to each of the one or more sound projection patterns, the color scheme including a mapping of the characteristics to a color space.
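As an illustration of how such characteristics and a characteristic-to-color mapping might be represented in software, the sketch below defines a hypothetical ProjectionPattern structure; the field names and the specific hue/saturation choices are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass

# Hypothetical characteristics of one sound projection pattern, mirroring the
# list in the text: beam width, direction, origin, frequency range, SNR, type.
@dataclass
class ProjectionPattern:
    beam_width_deg: float          # angular width of the beam
    direction_deg: float           # steering direction in the table plane
    origin_xy: tuple               # anchor point (e.g., array center) in table coordinates
    freq_range_hz: tuple           # (low, high)
    snr_db: float
    kind: str                      # "microphone" (receiver) or "speaker" (generator)

def pattern_color(p: ProjectionPattern) -> tuple:
    """Map pattern characteristics to an RGB color (0-1 floats).

    One possible mapping: hue from the pattern type, saturation from SNR,
    so each displayed beam gets a visually distinct color scheme.
    """
    import colorsys
    hue = 0.6 if p.kind == "microphone" else 0.0      # blue-ish vs. red-ish
    sat = max(0.2, min(1.0, p.snr_db / 30.0))          # stronger SNR -> more saturated
    return colorsys.hsv_to_rgb(hue, sat, 1.0)

beam = ProjectionPattern(15.0, 40.0, (0.0, 0.0), (300.0, 3400.0), 18.0, "microphone")
print(pattern_color(beam))
```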
[0008]
[0008]
According to another aspect, the location of the sound transducer array with respect to the physical space may be determined so that an origin of the graphical representation can be generated based on the location.
Further, the orientation of the sound transducer array with respect to the physical space may be determined so that an orientation vector can be generated as a reference on the graphical representation based on the orientation. The orientation vector may be relative to the sound transducer array.
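One hypothetical way the table side could derive the array's origin and orientation vector, assuming two localized reference points on the array housing (e.g., NFC tags or inductive elements, as mentioned later in the description), is sketched below; the function and its inputs are illustrative assumptions.

```python
import math

def array_pose_on_table(tag_a_xy, tag_b_xy):
    """Estimate the array's origin and orientation vector on the table surface.

    Assumes (hypothetically) that two reference points on the array housing
    have been localized by the touch table at positions tag_a_xy and tag_b_xy
    (table coordinates, meters).
    """
    ox = (tag_a_xy[0] + tag_b_xy[0]) / 2.0      # origin: midpoint between the tags
    oy = (tag_a_xy[1] + tag_b_xy[1]) / 2.0
    dx, dy = tag_b_xy[0] - tag_a_xy[0], tag_b_xy[1] - tag_a_xy[1]
    norm = math.hypot(dx, dy) or 1.0
    orientation = (dx / norm, dy / norm)        # unit vector along the array axis
    return (ox, oy), orientation

origin, direction = array_pose_on_table((0.10, 0.00), (0.30, 0.05))
print(origin, direction)
```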
[0009]
[0009]
According to one aspect, the sound transducer array includes multiple microphones, the multiple
microphones being located in the car. The sound transducer array further includes a number of
speakers, the plurality of speakers being located in the car.
[0010]
[0010]
According to one aspect, detecting the at least one command may include detecting a user interaction in the physical space and decoding the interaction to determine a desired action by the user. The desired action may include hiding the graphical representation of one or more sound projection patterns. After the one or more sound projection patterns are modified, the graphical representation of the one or more sound projection patterns in the physical space may be updated. A user interface may be displayed in the physical space to allow the user to select the desired action. The desired action may further include selecting one or more sound projection patterns so that a second action can be applied to them, or generating one or more virtual groupings of the one or more sound projection patterns. The graphical representation may include an illustration of the one or more virtual groupings, the illustration may include one or more virtual layers, and each of the one or more virtual groupings corresponds to at least one of the one or more virtual layers.
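A minimal sketch of the virtual grouping and virtual layer idea described above, using hypothetical names; it shows a grouping being created and a second desired action applied to the whole grouping.

```python
# Illustrative sketch only: each grouping of sound projection patterns lives on
# its own virtual layer, and a later action can be applied to the whole grouping.
class SoundField:
    def __init__(self, pattern_ids):
        self.patterns = {pid: {"hidden": False, "gain_db": 0.0} for pid in pattern_ids}
        self.groupings = []                          # each entry: {"layer": n, "members": [...]}

    def group(self, member_ids):
        layer = len(self.groupings)                  # one virtual layer per grouping
        self.groupings.append({"layer": layer, "members": list(member_ids)})
        return layer

    def apply_to_group(self, layer, action, **kwargs):
        for pid in self.groupings[layer]["members"]:
            if action == "hide":
                self.patterns[pid]["hidden"] = True
            elif action == "gain":
                self.patterns[pid]["gain_db"] = kwargs.get("gain_db", 0.0)

field = SoundField([0, 1, 2])
layer = field.group([0, 2])          # first desired action: create a virtual grouping
field.apply_to_group(layer, "hide")  # second desired action applied to that grouping
print(field.patterns)
```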
[0011]
[0011]According to one aspect, a second interaction of the user in the physical space may be decoded to determine a second desired action by the user, and the second desired action may be applied to the one or more virtual groupings.
[0012]
[0012]
Another example provides a sound transducer array in communication with physical space to
control a sound field.
The sound transducer array may include a speaker array, a microphone array in communication
with the speaker array for capturing sound, and at least one processor in communication with
the microphone array. The at least one processor may be configured to display, in physical space,
one or more projection pattern graphical representations associated with the sound transducer
array, wherein the sound transducer array is in communication with the physical space. Physical
space may include, for example, touch sensing surfaces, display screens, and tablet displays.
[0013]
[0013]
Once the graphical representation of the one or more sound projection patterns is displayed, the
at least one processor may be further configured to detect at least one command directed to the
graphical representation. At least one command directed to the graphical representation may be
detected to modify one or more sound projection patterns of the sound transducer array based
on the at least one command. The at least one command may be a gesture, including tapping, drawing one or more circles around one or more sound projection patterns, and grouping the sound projection patterns together.
[0014]
[0014]
Displaying the one or more sound projection patterns may include generating a graphical
representation of the one or more projection patterns based on characteristics of the one or more
projection patterns. The characteristics include at least one of: beam width, direction, origin,
frequency range, signal to noise ratio, receiver or generator type of one or more sound projection
patterns. The graphical representation may be a scale proportional to the characteristics of the
one or more sound projection patterns, wherein the scale comprises a one-to-one
correspondence with the characteristics of the at least one sound projection pattern.
[0015]
[0015]According to one aspect, the graphical representation may include a color scheme
corresponding to each of the one or more sound projection patterns, the color scheme including
mapping of the characteristic to color space.
[0016]
[0016]
According to another aspect, the at least one processor may be further configured to determine the location of the sound transducer array with respect to the physical space and to generate an origin of the graphical representation based on the location.
Additionally, the at least one processor may be configured to determine an orientation of the sound transducer array with respect to the physical space and to generate an orientation vector as a reference on the graphical representation based on the orientation. The orientation vector may be relative to the sound transducer array.
[0017]
[0017]
According to one aspect, the sound transducer array includes multiple microphones, the multiple
microphones being located in the car. The sound transducer array further includes a number of
speakers, the plurality of speakers being located in the car.
[0018]
[0018]
According to one aspect, to detect the at least one command, the at least one processor may be further configured to detect a user interaction in the physical space and to decode the interaction to determine a desired action by the user. The desired action may include hiding the graphical representation of one or more sound projection patterns. After the one or more sound projection patterns are modified, the graphical representation of the one or more sound projection patterns in the physical space may be updated.
[0019]
[0019]
In accordance with one aspect, the at least one processor may be further configured to cause a user interface to be displayed in the physical space to allow the user to select a desired action. The desired action may further include selecting one or more sound projection patterns so that a second action can be applied to them, or generating one or more virtual groupings of the one or more sound projection patterns. The graphical representation may include an illustration of the one or more virtual groupings, the illustration may include one or more virtual layers, and each of the one or more virtual groupings corresponds to at least one of the one or more virtual layers.
[0020]
[0020]In accordance with one aspect, the at least one processor is further configured to decode a second interaction of the user in the physical space and to decode a second desired action by the user, where the second desired action is applied to the one or more virtual groupings.
[0021]
[0021]
A third example provides a sound transducer array in communication with physical space to
control a sound field.
The sound transducer array may include means for displaying, on physical space, one or more
projection pattern graphical representations associated with the sound transducer array, wherein
the sound transducer array is in communication with the physical space. Physical space may
include, for example, touch sensing surfaces, display screens, and tablet displays.
[0022]
[0022]
The sound transducer array may further include means for detecting at least one command directed at the graphical representation and means for modifying one or more sound projection patterns of the sound transducer array based on the at least one command. The at least one command may include tapping a number of times, drawing one or more circles around one or more sound projection patterns, and grouping together multiple sound projection patterns of the one or more sound projection patterns so that they operate as a group.
[0023]
[0023]
To display the one or more sound projection patterns, the sound transducer array may include means for generating a graphical representation of the one or more sound projection patterns based on the characteristics of the one or more sound projection patterns. The
characteristics include at least one of: beam width, direction, origin, frequency range, signal to
noise ratio, receiver or generator type of one or more sound projection patterns. The graphical
representation may be a scale proportional to the characteristics of the one or more sound
projection patterns, wherein the scale comprises a one-to-one correspondence with the
characteristics of the at least one sound projection pattern.
[0024]
[0024]According to one aspect, the graphical representation may include a color scheme
corresponding to each of the one or more sound projection patterns, the color scheme including
mapping of the characteristic to color space.
[0025]
[0025]
According to another aspect, the sound transducer array may further include means for determining the location of the sound transducer array with respect to the physical space, and means for generating an origin of the graphical representation based on the location.
In addition, the sound transducer array may further include means for determining the orientation of the sound transducer array with respect to the physical space, and means for generating an orientation vector as a reference on the graphical representation based on the orientation. The orientation vector may be relative to the sound transducer array.
[0026]
[0026]
According to one aspect, the sound transducer array includes multiple microphones, the multiple
microphones being located in the car. The sound transducer array further includes a number of
speakers, the plurality of speakers being located in the car.
[0027]
[0027]
According to one aspect, detecting the at least one command may include detecting a user interaction in the physical space and decoding the interaction to determine a desired action by the user. The
desired action may include hiding a graphical representation of one or more sound projection
patterns. Further, the sound transducer array may include: means for updating a graphical
representation of the one or more sound projection patterns on the physical space after the one
or more sound projection patterns have been modified.
[0028]
[0028]
According to one aspect, the sound transducer array may further include means for displaying a user interface in the physical space to allow the user to select a desired action. The desired action may further include selecting one or more sound projection patterns so that a second action can be applied to them, or generating one or more virtual groupings of the one or more sound projection patterns. The graphical representation may include an illustration of the one or more virtual groupings, the illustration may include one or more virtual layers, and each of the one or more virtual groupings corresponds to at least one of the one or more virtual layers.
[0029]
[0029]According to one aspect, the sound transducer array may further include means for decoding a second user interaction in the physical space and means for decoding a second desired action by the user, where the second desired action is applied to the one or more virtual groupings.
[0030]
[0030]
A fourth example provides a computer-readable storage medium including one or more instructions for controlling a sound field which, when executed by at least one processor, cause the at least one processor to display, in a physical space, a graphical representation of one or more sound projection patterns associated with a sound transducer array, where the sound transducer array is in communication with the physical space.
The physical space may include, for example, a touch-sensing surface, a display screen, or a tablet display.
[0031]
[0031]
Once the graphical representation of the one or more sound projection patterns is displayed, the
at least one processor may be further configured to detect at least one command directed to the
graphical representation. At least one command directed to the graphical representation may be
detected to modify one or more sound projection patterns of the sound transducer array based
on the at least one command. The at least one command may be a gesture, including tapping, drawing one or more circles around one or more sound projection patterns, and grouping the sound projection patterns together.
[0032]
[0032]The computer-readable storage medium may further include one or more instructions for controlling the sound field which, when executed by the at least one processor, cause the at least one processor to detect at least one command directed at the graphical representation, modify the one or more sound projection patterns of the sound transducer array based on the at least one command, and update the graphical representation of the one or more sound projection patterns in the physical space.
[0033]
[0033]In accordance with one embodiment, the computer-readable storage medium further includes one or more instructions which, when executed by the at least one processor, cause the at least one processor to display the one or more sound projection patterns by generating a graphical representation of the one or more sound projection patterns based on the characteristics of the one or more sound projection patterns.
[0034]
[0034]According to one embodiment, the computer-readable storage medium further includes one or more instructions which, when executed by the at least one processor, cause the at least one processor to determine the location of the sound transducer array with respect to the physical space and to generate the origin of the graphical representation based on the location.
[0035]
[0035]According to one embodiment, the computer-readable storage medium further includes one or more instructions which, when executed by the at least one processor, cause the at least one processor to determine the orientation of the sound transducer array with respect to the physical space and to generate an orientation vector as a reference on the graphical representation based on the orientation.
[0036]
[0036]
According to one aspect, the sound transducer array includes multiple microphones, the multiple
microphones being located in the car.
The sound transducer array further includes a number of speakers, the plurality of speakers
being located in the car.
[0037]
[0037]
According to one embodiment, the computer-readable storage medium may further include one or more instructions which, when executed by the at least one processor to detect the at least one command, cause the at least one processor to detect a user interaction in the physical space and to decode the interaction to determine the desired action by the user.
The desired action may include hiding the graphical representation of one or more sound projection patterns.
[0038]
[0038]In accordance with one embodiment, the computer-readable storage medium further includes one or more instructions which, when executed by the at least one processor, cause the at least one processor to decode a second interaction of the user in the physical space and to decode a second desired action by the user, where the second desired action is applied to the one or more virtual groupings.
[0039]
[0039]
Various features, properties and advantages will become apparent from the detailed description
given hereinafter, when taken in conjunction with the drawings in which like reference numerals
are correspondingly identified throughout.
FIG. 1 illustrates an example of a typical system using a sound transducer array.
FIG. 2 illustrates a system that includes a sound transducer array and a device having a touch
sensitive screen.
FIG. 3 illustrates an example of an array as utilized to create a privacy zone for voice
communication. FIG. 4 illustrates an example of a spatialized vehicle navigation system using an
array. FIG. 5 illustrates an example of utilizing an array for a surround sound experience. FIG. 6
illustrates an example of utilizing an array to deliver multiple audio programs simultaneously in
different directions without interfering with one another. FIG. 7 illustrates an example of
representing a sound field in physical space, according to one embodiment. FIG. 8 illustrates an
example of a sound field visualization system in which sound fields are symbolically represented
by arrows. FIG. 9 illustrates an example of representing, in physical space, a sound field
visualization image that indicates that the sound transducer array is not directed to the talking
individual according to one embodiment. FIG. 10 illustrates the touch surface-enabled table of
FIG. 9 showing the dragging commands illustrated as arrows on the table. FIG. 11 illustrates the adjusted sound field of the array, visualized as an updated sound field visualization image, showing the participants that the array is now properly oriented to receive sound from the talking individual. FIG. 12 illustrates a block diagram of a sound field
visualization and control system. FIG. 13 illustrates a teleconferencing system or sound stage
scenario where the user may need to control the pickup of sound from two adjacent speakers
that are separated from one another. FIG. 14 illustrates the touch surface enabled table of FIG. 8
with the very important talker ("VIP") which is the major focus of the pick-up beam from the
array. FIG. 15 illustrates a flowchart of a general method for representing a sound field in
physical space, according to one embodiment. FIG. 16 illustrates an example of a sound
transducer array that some implementations may use. FIG. 17 illustrates an example of a device
that some implementations may use. FIG. 18 illustrates a system for representing and controlling
a sound field in physical space utilizing one or more tablets, according to one embodiment.
FIG. 19A illustrates a system including a sound transducer array and several devices. FIG. 19B
illustrates another system that includes a sound transducer array and several devices. FIG. 19C
illustrates another system that includes a sound transducer array, a central mobile device, and
several devices.
[0040]
[0061]
In the following description, specific details are set forth to provide a thorough understanding of
various aspects of the present disclosure. However, it will be understood by those skilled in the
art that these aspects may be practiced without these specific details. For example, circuits may
be shown in block diagrams in order not to obscure the aspects in unnecessary detail. In other
instances, well-known circuits, structures and techniques may not be shown in detail in order not
to obscure the aspects of the disclosure.
[0041]
[0062]
FIG. 1 illustrates an example of a typical system using a sound transducer array. As shown, the
system includes a sound transducer array 100 located on a conference table 101. Sound
transducer array 100 includes a number of microphones / speakers arranged to capture sound
(or audio) from different directions. As an example, four individuals 104-110 may be located
around the conference table. One individual 106 is speaking and this sound is captured by the
sound transducer array 100. However, the sound field of the captured sound is neither
symbolically nor visually represented on the conference table. As a result, there is no evidence
that the sound transducer array 100 is focusing on and / or capturing sound from the individual 106 who is speaking.
[0042]
[0063]
Various aspects of methods and apparatus for controlling a sound field in a physical space (e.g., on a physical surface) are described herein as embodied in a system that includes a sound transducer array (herein "sound transducer array", "transducer array", or simply "array") and a touch-surface-enabled display table. An array may include a group of transducers (multiple speakers and / or microphones). The array may be configured to perform spatial processing of the signals for the group of transducers so that sound reproduction (in configurations where the array contains many speakers) or sound pickup (in configurations where the array contains many microphones) has a spatial pattern concentrated in one direction while interference from other directions is reduced.
[0043]
[0064]
By providing an intuitive and tactile way of controlling the sound field according to one embodiment, the efficiency of the user experience and of the control can be greatly enhanced. In one approach, a modern touch-surface-enabled display table can be used to project a visualized virtual sound field or sound projection pattern. The touch-surface-enabled display table has a table surface that can both display images and detect touch input from the user. Thus, the user can adjust the parameters associated with the visualized sound field directly and intuitively by interacting with the touch surface while receiving visual feedback in real time or near real time. Possible interaction styles include the user executing one or more commands on the touch surface and adjusting these commands in accordance with visual feedback received from changes in the display on the touch surface.
[0044]
[0065]
FIG. 2 illustrates a system that includes a sound transducer array and a device having a touch
sensitive screen. As shown in FIG. 2, the system may include a sound transducer array 200 and a
device 201.
[0045]
[0066]
In some implementations, the sound transducer array 200 includes at least one processor,
memory, several microphones and speakers, at least one transceiver, several inductive elements,
a compass, at least one communication interface, and at least one identification marker. The
microphones and speakers of the sound transducer array 200 may be arranged to capture sound,
audio or microphone beams from different directions, and to transmit speaker beams displayed
in physical space, respectively. For example, the microphones and speakers may be arranged in a
linear, circular or other arrangement. Sound transducer array 200 may communicate with device
201 by using a communication interface and at least one transceiver. In some implementations,
the transceiver provides a wireless communication link (for receiving and transmitting data)
between the sound transducer array 200 and the device 201. Different implementations may use
different communication protocols to communicate between the sound transducer array 200 and
the device 201. Examples of communication protocols include near field communication (NFC),
Wi-Fi, Bluetooth (R), ZigBee (R), Digital Living Network Alliance (DLNA), and Airplay.
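For illustration only, the pattern information that the array shares with the device might be serialized and pushed roughly as sketched below; the field names and the plain TCP socket are stand-ins for whichever of the listed transports (NFC, Wi-Fi, Bluetooth, ZigBee, DLNA, AirPlay) is actually used, and are not defined by the disclosure.

```python
import json
import socket

def send_pattern_update(host, port, pattern):
    """Send one hypothetical 'pattern update' message over a length-prefixed TCP frame."""
    msg = json.dumps({
        "type": "pattern_update",
        "beam_width_deg": pattern["beam_width_deg"],
        "direction_deg": pattern["direction_deg"],
        "kind": pattern["kind"],
    }).encode("utf-8")
    with socket.create_connection((host, port)) as link:
        link.sendall(len(msg).to_bytes(4, "big") + msg)   # simple length-prefixed frame

# Example (requires a listener at the given address):
# send_pattern_update("192.168.1.50", 5000,
#                     {"beam_width_deg": 15.0, "direction_deg": 40.0, "kind": "microphone"})
```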
[0046]
[0067]
In some implementations, the compass provides a way for the sound transducer array 200 to
determine its orientation. Orientation information may be used internally or passed to other
devices (e.g., device 201) to determine the position and / or orientation of the sound transducer
array in some implementations. Inductive elements may also be used to determine the position
and / or orientation of the sound transducer array. For example, an inductive element may be
used by a device (e.g., device 201) to determine the position and orientation of a sound
transducer array on a touch sensitive screen. Identification markers may also be used to
determine the position and / or orientation of the microphone and speaker.
[0047]
[0068]
The above description is an overview of the possible components / elements of the sound
transducer array. A more detailed description of the components / elements of the sound transducer array is provided below with reference to FIG. 16.
[0048]
[0069]
Further, as shown in FIG. 2, device 201 may include touch sensitive screen 202. Touch sensitive
screen 202 may be used to provide tactile control of sound. Touch sensitive screen 202 may also
be used to sense and capture user movement (eg, finger movement on the touch screen). In some
implementations, the device 201 and touch sensitive screen 202 may be included in a surface
table.
[0049]
[0070]
In addition to touch sensitive screen 202, device 201 may also include at least one processor,
memory, at least one transceiver, and at least one communication interface. In some
implementations, the above components allow the device 201 to communicate with the sound
transducer array 200, local and remote computers, wireless devices (e.g., phones), and portable computing devices (e.g., tablets). The components / elements of device 201 will be further described below with reference to FIG. 17.
[0050]
[0071]
Having provided an overview of the various devices and components of the system for representing and controlling the sound field in physical space, a detailed description of how these devices are used in such a system is provided hereinafter. Some exemplary use cases for arrays are described with reference to FIGS. 3-6. These use cases may be displayed on a surface table, such as a conference table, or on one or more tablets, such as an iPad (registered trademark), where each individual has a separate tablet. A system in which each of a plurality of individuals utilizes a tablet is further described below with reference to FIG. 18.
[0051]
[0072]
FIG. 3 illustrates an example of an array utilized to create a privacy zone for voice communication. As shown, the listener is in the "bright" zone and four potential eavesdroppers are located in the "dark" zone. These zones are illustrated in the physical space, so that multiple individuals sitting around this physical space (e.g., a conference table) can visualize a pattern representing the "bright" zone and one or more patterns representing the "dark" zones. While an individual in the "bright" zone can hear the target sound, an individual (or individuals) in the "dark" zone hears a muted version of the sound in the bright zone, or a version of the sound in the bright zone that cannot be recognized. The unrecognizable version of the sound may be a masked version of the sound in the bright zone. Beamforming techniques or other spatial audio techniques may be applied in forming the bright and dark zones. Further discussion of these techniques may be found in U.S. Patent Application Serial No. 13/665,592, entitled "Systems, Methods, and Apparatus for Producing a Directional Sound Field," filed October 31, 2012 (Attorney Docket No. 112639). The representation of the bright and dark zones may also be visually displayed as the sound is emanating from the speakers in the sound transducer array. In such embodiments, the illustrated listener may be in a voice communication call and may be using the sound transducer array to block potential eavesdroppers from listening to the listener's conversation. For example, the display of the bright and dark zone patterns may be on the surface table, as further described herein.
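As one way to picture how beamforming can favor a "bright" zone toward the listener while attenuating other directions, the following sketch uses standard delay-and-sum weights; this is a generic textbook technique assumed for illustration, not the disclosure's specific method.

```python
import numpy as np

def delay_and_sum_weights(mic_xy, focus_xy, freq_hz, c=343.0):
    """Delay-and-sum weights that steer a pickup beam toward focus_xy, so sound
    from that 'bright' direction adds coherently while other directions are
    attenuated."""
    mic_xy = np.asarray(mic_xy, dtype=float)
    delays = np.linalg.norm(mic_xy - np.asarray(focus_xy), axis=1) / c   # seconds
    return np.exp(1j * 2.0 * np.pi * freq_hz * delays) / len(mic_xy)     # phase-align each mic

# Example: 8-microphone linear array, beam focused on a listener 1 m in front of it.
mics = [(x, 0.0) for x in np.linspace(-0.14, 0.14, 8)]
w = delay_and_sum_weights(mics, (0.0, 1.0), freq_hz=1000.0)
print(np.round(np.abs(w), 3))
```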
[0052]
[0073]
A different variation of the privacy zone for voice communication uses the same technique as creating the privacy zone described above, but the listener and the eavesdroppers are all listening to different programs, for example as shown in FIG. 6. In such a variation, the illustrated patterns in the physical space may each be a pattern representing a program. In a different configuration, the illustrated patterns may also represent privacy zones for listening to the programs. For example, each person around the physical space may be listening to a different sound recording or sound stream (e.g., three radio stations). Bright and dark zones can be personalized for each person. One possible implementation is to overlay three privacy zones, for example when there are three people, and display representations of each of the three privacy zones in one physical space. Thus, each person effectively has their own privacy zone relative to the other persons.
[0053]
[0074]
FIG. 4 illustrates an example of a spatialized vehicle navigation system using an array. In this
example, the sound of each of the voice commands indicating the route can be seen as
originating from the direction in which the listener should turn.
[0054]
[0075]FIG. 5 illustrates an example of utilizing an array for a surround sound experience.
[0055]
[0076]FIG. 6 illustrates an example of utilizing an array to deliver multiple audio programs
simultaneously in different directions without interfering with one another.
[0056]
[0077]
Spatial processing of the signals for the array to implement the above use cases may be provided by control software.
The user may interact with the control software using a conventional "keyboard and mouse" interface to configure and control sound field generation and pickup pattern adjustment, but the conventional interface still provides only an abstract approach to controlling the sound field. Furthermore, this interface needs to be in the same location where the sound originates.
[0057]
[0078]
FIG. 7 illustrates an example of representing a sound field in physical space, according to one
embodiment. In this system, graphical representations of the sound captured by the sound
transducer array can be juxtaposed in the physical space of the sound field. The graphical
representation may be in the form of a sound projection pattern (eg, one or more physical wave
fields).
[0058]
[0079]
As shown, the system can include a sound transducer array 700 and a device 701. In some
implementations, device 701 may be part of a table. Sound transducer array 700 may include
several microphones / speakers arranged to capture sound / audio from different directions. The
microphones may be arranged in a linear, circular or other arrangement. Device 701 may include
touch sensitive screen 702. Touch sensitive screen 702 may be for displaying in graphical space
a graphical representation of the sound field of the captured sound. Preliminary information
about the sound can also be displayed in text or in a chart around the tagged array. If there is a
need to change the sound, the touch screen can provide some control that allows the individual
(or the user) to modify the sound.
[0059]
[0080]
Sound transducer array 700 may communicate with device 701 using at least one wireless
communication link using a particular communication protocol. Examples of communication
protocols include near field communication (NFC), Wi-Fi, Bluetooth, ZigBee, Digital Living
Network Alliance (DLNA), and Airplay.
[0060]
[0081]
Further, FIG. 7 illustrates a device 701 having a touch-sensitive screen 702 as part of a conference table, around which four individuals 704-710 participating in the meeting / conference sit. As shown in FIG. 7, sound transducer array 700 may be disposed on screen 702 of device 701.
[0061]
[0082]
From the sound transducer array 700, the actual filter information for the microphones and speakers is available. From this information, a sound projection pattern or microphone pickup pattern in 3D space (in this case, the 2D horizontal plane contains most of the information) can be calculated. This information may be sent to the surface table via wireless protocols such as Bluetooth, near field communication, DLNA, etc., as described above. Using this information, various computer graphical visualizations can be generated. Typical graphics may be 2D graphics consistent with the 2D sound pressure, or a more prominent version thereof. The graphics origin may be anchored at the center of the sound transducer array 700 and may shift as the array moves. In some implementations, ultrasound / infrared / sound pulses may be used to determine the position of the sound transducer array 700. In another implementation, the sound transducer array 700 may include near field communication (NFC) tags, which allow the device 701 to determine the position of the sound transducer array 700. In this way, the representations of the sound transducer array (i.e., symbolization and visualization) can be aligned in space with the actual sound field.
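The paragraph above describes computing a 2D sound projection or pickup pattern from the array's filter information. A generic far-field beam-pattern calculation of that kind might look like the following sketch; this is standard array-processing math assumed for illustration, with a hypothetical function name and geometry, not the patent's specific algorithm.

```python
import numpy as np

def pickup_pattern_2d(mic_xy, weights, freq_hz, c=343.0, n_angles=360):
    """Far-field beam pattern magnitude over azimuth, from per-microphone filter weights."""
    mic_xy = np.asarray(mic_xy, dtype=float)
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    directions = np.stack([np.cos(theta), np.sin(theta)], axis=1)      # unit look vectors
    k = 2.0 * np.pi * freq_hz / c
    phases = np.exp(1j * k * directions @ mic_xy.T)                    # (angles, mics)
    return theta, np.abs(phases @ np.asarray(weights))

# Example: uniform weights on an 8-mic linear array give a broadside beam.
mics = [(x, 0.0) for x in np.linspace(-0.14, 0.14, 8)]
theta, gain = pickup_pattern_2d(mics, np.ones(8) / 8.0, freq_hz=2000.0)
print(f"peak gain {gain.max():.2f} at {np.degrees(theta[gain.argmax()]):.0f} degrees")
```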
[0062]
[0083]
FIG. 8 illustrates an example of a sound field visualization system in which sound fields are
symbolically represented by arrows. The arrows may extend from the sound transducer array
700 to the location of the captured sound.
[0063]
[0084]FIG. 9 illustrates an example of representing, in physical space, a sound field visualization image showing that a sound transducer array is not directed to a speaking individual, according to one embodiment.
[0064]
[0085]
Described herein is a system that allows an individual (or user) to directly "touch the sound," providing a more intuitive and direct approach to interacting with sound field generation and pickup patterns.
In one aspect of the disclosed approach, the touch-surface-enabled table 920 illustrated in FIG. 9 may include a touch screen interface and other sensors on a touch screen display surface ("table surface") 922. Tagged objects, such as a sound transducer array ("array") 930, may be placed on or near the table 920. The example depicted in FIG. 9 includes four individuals 902, 904, 906, and 908 participating in a teleconference and located around the touch-surface-enabled table 920, such as a conference table. One individual 906 is speaking, and the sound from that individual is captured by the array 930.
[0065]
[0086]
Array 930 may include several microphones / speakers arranged to capture / generate sound (or
audio) from / to different directions. The microphones may be arranged in a linear, circular or
other arrangement. Information and graphics about these objects may be displayed on the table
920. For example, a graphical information element 950 may be displayed on the table surface 922 that depicts parameters for the teleconference, such as spatial processing parameters (exemplified as "side lobe removal: 20 dB" and "beam width: 15 degrees"), identification information of the person who is speaking (exemplified as "speaker: Heidi"), and time information (exemplified as "meeting remaining time: 20 minutes").
[0066]
[0087]
In addition, for tagged objects that are sound devices such as array 930, graphical
representations of sound emissions and / or microphone pick-up patterns may be visualized near
them. In some implementations, ultrasound / infrared / sound pulses are used to determine the
position of the array 930. In other implementations, the array 930 may include near field
communication (NFC) tags, which allow the table 920 to determine the position and relative
orientation of the array 930. Thus, the representation (ie, symbolization and representation) of
any sound projection pattern with respect to the array 930 may be spatially arranged with the
relevant actual sound fields. For example, a sound projection pattern (or field visualization image)
952 may be displayed on the table surface 922 for the representation of the sound field of the
captured sound. The sound field visualization image 952 may provide visual confirmation that
the array 930 is focusing on and / or capturing sound from the talking individual 906. As can be seen in FIG. 9, the sound field visualization image 952 may visually show the participants that the array 930 is not directed to the talking individual 906, even though it should be.
[0067]
[0088]
In one aspect of the disclosed approach, touch screen control software may be used to modify
the spatial processing characteristics of the array 930. Touch screen control software is
implemented as part of a sound field visualization and control system ("Sound Field System")
1200, an example of which is depicted in FIG. In sound field system 1200, array 930 may
communicate with table 920 using any number of wireless communication links 1290 using
various communication technologies.
[0068]
[0089]
From the array 930, actual filter information is available for the microphones and speakers
contained therein. From this information, a sound projection pattern or microphone pickup
pattern in a three dimensional (3D) space (in this case a two dimensional (2D) plane horizontal to
the table surface 922 contains most of the information) can be determined. This information may
be transmitted to the surface table via the wireless communication link 1290. The table 920 can also function as a tactile interface that senses multi-touch and other commands (shown as "multi-touch commands" 1224), while the table surface 922 displays the visual counterpart of the behavior of the sound projection patterns (shown as "graphic visualization" 1222).
[0069]
[0090]
According to one embodiment, a user interface may be displayed in the physical space to allow an individual (or user) to select a desired action. The desired action may include selecting one or more sound projection patterns so that a second action can be applied to them, or creating one or more virtual groupings of one or more sound projection patterns. The graphical representation may include an illustration of the one or more virtual groupings, which may include one or more virtual layers, and each of the one or more virtual groupings may correspond to at least one of the one or more virtual layers.
[0070]
[0091]
An individual (or user) can adjust parameters related to the visualized sound projection patterns directly and intuitively by interacting with the touch surface while receiving visual feedback in real time or near real time. Possible interaction styles may include an individual executing one or more commands on the touch surface. The commands may be used to manipulate a graphical representation of one or more sound projection patterns (e.g., one or more physical wave fields) associated with the sound transducer array. The commands may be in the form of text, or may be communicated from a keyboard, mouse, button, bar, menu, or their counterparts in software. The command may also be a gesture that can be adjusted based on visual feedback received from changes in the display on the touch surface. The gesture may be performed using an individual's finger instead of a computer mouse. Gestures may include, but are not limited to, selecting a sound projection pattern by tapping multiple times (double or triple tapping), drawing a circle one or more times around the pattern, sending different beams to different virtual layers, temporarily hiding one or more beams, selecting one or more beams, grouping multiple sound projection patterns together and manipulating them as a group, and / or applying additional graphic effects once a beam or grouping is selected, so that adjustments to the beam or grouping are emphasized.
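A rough sketch of how some of the gestures listed above (multiple taps, circles, drags) might be classified from raw touch points; the event format and the thresholds are assumptions for illustration only.

```python
import math

def classify_gesture(touch_points, tap_window_s=0.6):
    """Classify a short touch sequence as tap/double_tap/triple_tap, circle, or drag.

    touch_points is assumed to be a list of {"t": time_s, "x": x_m, "y": y_m} dicts.
    """
    times = [p["t"] for p in touch_points]
    xs = [p["x"] for p in touch_points]
    ys = [p["y"] for p in touch_points]
    if len(touch_points) <= 3 and (times[-1] - times[0]) < tap_window_s:
        return {"double": "double_tap", "triple": "triple_tap"}.get(
            {2: "double", 3: "triple"}.get(len(touch_points)), "tap")
    # A stroke that returns near its starting point is treated as a drawn circle.
    closed = math.hypot(xs[-1] - xs[0], ys[-1] - ys[0]) < 0.02
    return "circle" if closed else "drag"

taps = [{"t": 0.0, "x": 0.1, "y": 0.1}, {"t": 0.2, "x": 0.1, "y": 0.1}]
print(classify_gesture(taps))   # -> double_tap
```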
[0071]
[0092]
Returning to the example of FIG. 9, graphic visualization 1222 may be generated using
information received from array 930 (exemplified as sound input and output pattern, side
information 1232 ). General graphics may include 2D graphics consistent with 2D sound
pressure or a prominent version thereof. In the example illustrated in FIG. 9, the sound field
visualization image 952 may visually represent a 2D sound field for the captured sound. In one
aspect of the disclosed approach, the origin of the graphics may be anchored at the center of the
array 930 and may shift as the array 930 moves.
[0072]
[0093]
As shown in FIG. 9, the sound field visualization image 952 visually indicates to the participant
that the array 930 is not directed to the talking individual 906, even if it should be directed. ,
One of the participants may make a gesture to redirect the sound field of the array 930 towards
the individual 906. Control information, such as information based on multi-touch commands
1224 received from the table surface 922, can be arrayed by changing the characteristics of the
array 930 (exemplified as "sound field boundaries, intensity, direction, etc." 1234) It may be used
to control 930. Thus, the individual 906 may issue a dragging command, illustrated in FIG. 10
with arrow 1002, on the table surface 922 to redirect the array 930. FIG. 11 illustrates the
adjusted sound field of array 930, which is visualized as updated sound field visualization 1152,
which array 930 receives sound from person 906. Show participants that they are currently
03-05-2019
23
properly directed. Thus, the user literally does "touch the sound", redirects the beam pattern,
draws a new beam pattern, adjusts the parameter values, etc., and the sound field is manipulated.
You can see the visual changes that accompany it.
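One way the dragging command of FIG. 10 could be translated into a new steering direction for the array, assuming the table knows the array's origin and orientation vector; the helper below is hypothetical and not the patent's algorithm.

```python
import math

def steering_from_drag(drag_end_xy, array_origin_xy, array_orientation_xy):
    """Return a steering azimuth (degrees, relative to the array axis) for the
    pickup beam, derived from the drag end point relative to the array origin."""
    vx = drag_end_xy[0] - array_origin_xy[0]
    vy = drag_end_xy[1] - array_origin_xy[1]
    target = math.atan2(vy, vx)                                   # angle in table frame
    array_axis = math.atan2(array_orientation_xy[1], array_orientation_xy[0])
    return math.degrees(target - array_axis) % 360.0              # relative to array axis

new_azimuth = steering_from_drag((0.6, 0.4), (0.2, 0.1), (1.0, 0.0))
print(f"steer pickup beam to {new_azimuth:.1f} degrees")
```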
[0073]
[0094]
In another example, such as the teleconferencing system or sound stage scenario 1300 illustrated in FIG. 13, the user may need to control the pickup of sound from two adjacent speakers 1302, 1304 who are separated from one another. Because of the large distance between the adjacent speakers 1302, 1304, the pickup beam from the array 930 may need to be adjusted so that it is wide enough to cover both speakers 1302, 1304, yet not so broad that it picks up spurious sounds such as background noise. As illustrated by the sound field visualization image 1352 displayed on the table surface 922, the participants can see, because the pickup beam can be viewed visually, that the pickup beam from the array 930 is too narrow. That is, the user can see that the pickup beam from the array 930, as configured, is not wide enough. In this example, the user may, for example using two hands, gesture to the table 920 a "widening" of the pickup beam from the array 930, as illustrated by arrows 1392, 1394. The table 920 may then communicate to the array 930 that the user desires a wider beam. The array 930 may track the talking person so that the pickup beam can be switched automatically to be directed to the talking person.
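A sketch of how the two-handed widening gesture might be mapped to a requested beam width, assuming the two touch points and the array origin are known; the function name and the clamping limits are illustrative assumptions.

```python
import math

def beam_width_from_spread(touch_a_xy, touch_b_xy, array_origin_xy,
                           min_deg=10.0, max_deg=120.0):
    """Requested beam width (degrees) from the angular spread between two touch
    points as seen from the array origin, clamped to a sensible range."""
    def bearing(p):
        return math.atan2(p[1] - array_origin_xy[1], p[0] - array_origin_xy[0])
    spread = abs(math.degrees(bearing(touch_a_xy) - bearing(touch_b_xy)))
    spread = min(spread, 360.0 - spread)               # shortest angular separation
    return max(min_deg, min(max_deg, spread))

print(beam_width_from_spread((0.5, 0.3), (0.5, -0.1), (0.0, 0.0)))
```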
[0074]
[0095]
The array 930 may track the talking person to automatically switch the pickup beam to be directed to the talking person. Referring to the example of FIG. 9, there may be a very important talker ("VIP") 1410 whose speech is the key focus of the pickup beam from the array 930, as further illustrated in FIG. 14. In such a situation, various aspects of the techniques described in this disclosure may sacrifice losing some content from other individuals, such as the individuals 902, 904, 906, and 908, in order to focus the beam on the VIP 1410. In this example, the pickup beam from the array 930 can be locked to track the VIP 1410 by a gesture such as triple tapping or drawing a circle twice in the direction of the VIP 1410, so that the array 930 records only the VIP 1410. The sound field visualization image 1452 may be displayed on the table surface 922 to indicate to the participants the current direction of the pickup beam from the array 930, and a lock icon 1454 or other visual indication may appear on the table surface 922 to indicate that the beam is in lock mode. The user may use another gesture to unlock the pickup beam so that the array 930 can continue to beamform and / or track the person speaking.
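The lock/unlock behavior described above can be pictured as a small state machine; the sketch below is illustrative only, with hypothetical gesture names.

```python
class PickupBeamController:
    """Tracks whether the pickup beam is locked on a target or following the talker."""
    def __init__(self):
        self.locked_on = None                 # e.g., "VIP 1410" while in lock mode

    def on_gesture(self, gesture, target=None):
        if gesture in ("triple_tap", "double_circle") and target is not None:
            self.locked_on = target           # enter lock mode on the gestured target
        elif gesture == "unlock":
            self.locked_on = None             # resume tracking whoever is talking

    def current_target(self, active_talker):
        return self.locked_on if self.locked_on is not None else active_talker

ctrl = PickupBeamController()
ctrl.on_gesture("triple_tap", target="VIP 1410")
print(ctrl.current_target(active_talker="individual 906"))   # -> VIP 1410
ctrl.on_gesture("unlock")
print(ctrl.current_target(active_talker="individual 906"))   # -> individual 906
```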
[0075]
[0096]
The various aspects described herein may also be extended to tablets or other touch screen
devices, where the array may also be tagged and represented on the tablet device. For example,
many participants may each have a tablet device associated with the sound transducer array that
may be included as part of the system with the table 920.
[0076]
[0097]Having provided an overview of the various devices and components of the system for representing the sound field in physical space, a detailed description of how these devices are used in such a system is set forth below.
[0077]
[0098]
FIG. 15 illustrates a flowchart of a schematic method for providing tactile control of sound,
according to one embodiment.
Sounds are controlled in real time or near real time and may include subsonic sounds, ultrasound
sounds, infrared sounds, and radio frequency sounds. The physical surface may be, for example,
a display screen, a touch sensitive screen, a touch sensing surface, or a tablet display.
[0078]
[0099]
As shown in FIG. 15, a graphical representation of one or more projection patterns associated
with the transducer array is displayed on a physical surface, where the transducer array is in communication with the physical space (1502). Graphical representations of the one or more sound
projection patterns may be generated based on characteristics of the one or more sound
projection patterns. The characteristics may comprise at least one of beam width, direction,
origin, frequency range, signal to noise ratio, receiver or generator type of one or more sound
projection patterns. The graphical representation may be a scale that is proportional to the
characteristics of the one or more sound projection patterns, wherein the scale comprises a one-to-one correspondence with the characteristics of the at least one sound projection pattern.
According to one aspect, the graphical representation may comprise a color scheme
corresponding to each of the one or more sound projection patterns, the color scheme
comprising a mapping of features to color space.
[0079]
[0100]
Once a graphical representation of the one or more sound projection patterns associated with the transducer array is displayed in the physical space, at least one command directed at the graphical representation is detected (1504). As described above, the at least one command may be a gesture, such as tapping a number of times, drawing one or more circles around the one or more sound projection patterns, or grouping together a number of sound projection patterns of the one or more sound projection patterns to operate as a group.
[0080]
[0101]
According to one embodiment, detecting the at least one command may comprise detecting a user interaction in the physical space and decoding the interaction to determine a desired action by the user. The desired action may comprise hiding a graphical representation of one or more sound projection patterns (e.g., physical wave fields). Further, a user interface may be displayed in the physical space to allow the user to select the desired action; the desired action may comprise selecting one or more sound projection patterns so that a second action can be applied to them, or generating one or more virtual groupings of the one or more sound projection patterns. The graphical representation may comprise an illustration of the one or more virtual groupings; the illustration comprises one or more virtual layers, and each of the one or more virtual groupings corresponds to at least one of the one or more virtual layers.
[0081]
[0102]According to one embodiment, a second interaction of the user in the physical space is decoded to determine a second desired action by the user, and the second desired action is applied to the one or more virtual groupings.
[0082]
[0103]Based on the at least one command, one or more sound projection patterns of the sound transducer array may be modified (1506), and the graphical representation of the one or more sound projection patterns in the physical space may be updated (1508).
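Putting the steps of FIG. 15 together, a schematic control loop might look like the following sketch; the array and table objects and their methods are placeholders assumed for illustration, not APIs defined by the disclosure.

```python
def control_sound_field(array, table):
    """Schematic loop: display (1502), detect command (1504), modify (1506), update (1508)."""
    patterns = array.current_patterns()
    table.display(render(patterns))                 # 1502: show graphical representation
    while True:
        command = table.detect_command()            # 1504: command directed at the graphic
        if command is None:
            continue
        patterns = array.modify_patterns(command)   # 1506: modify projection patterns
        table.display(render(patterns))             # 1508: update the representation

def render(patterns):
    # Placeholder: turn pattern characteristics into drawable shapes for the table.
    return [{"direction": p["direction_deg"], "width": p["beam_width_deg"]} for p in patterns]
```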
[0083]
[0104]According to one embodiment, the location of the sound transducer array with respect to the physical space may be determined in order to generate an origin of the graphical representation based on the location.
[0084]
[0105]
According to one embodiment, the orientation of the sound transducer array with respect to the physical space may be determined in order to generate an orientation vector as a reference on the graphical representation based on the orientation.
The orientation vector may be relative to the sound transducer array.
[0085]
[0106]
According to one embodiment, the sound transducer array includes multiple microphones, the
multiple microphones being located in the car.
The sound transducer array further includes a number of speakers, the plurality of speakers
being located in the car.
[0086]
[0107]
According to one embodiment, the sound transducer array may comprise a combined
microphone and speaker array.
Alternatively, the sound transducer array may comprise a separate microphone array and a
separate speaker array. The microphone array captures a microphone beam that can be
displayed in physical space in a first color, and the speaker array can transmit a speaker beam
that is displayed in physical space in a second color, where the first color is different from the second color.
[0087]
[0108]
FIG. 16 illustrates an example of a sound transducer array that some implementations may use.
As shown in FIG. 16, the sound transducer array 1600 includes at least one processor /
processing circuit 1602, memory 1604, multiple microphones and speakers 1606, several input
devices 1608, at least one transceiver 1610, at least one user interface module 1612, and at least one communication interface module 1614.
[0088]
[0109]
A microphone and speaker 1606 may be used to capture sound and / or voice and transmit a
speaker beam that is displayed in physical space. The input devices 1608 allow the user to literally "touch the sound": to redirect beam patterns, draw new beam patterns, adjust parameter values, and so on, and to see the visual changes associated with manipulating the sound field.
[0089]
[0110]
The transceiver 1610 may enable the sound transducer array to transmit and receive wireless
signals from other devices (eg, a phone, computer, tablet, sound transducer array). The sound
transducer array may include a number of transceivers, which allow the sound transducer array
to communicate (e.g., wirelessly) with different devices using different communication links and
different communication protocols. In some implementations, user interface module 1612
provides an interface between microphone 1606, input device 1608, and processor / processing
circuit 1602. The user interface module 1612 may include several user interface modules (eg,
modules for each component). In some implementations, communication interface module 1614
provides an interface between transceiver 1610 and processor / processing circuit 1602.
Communication interface module 1614 may include several interface modules (eg, modules for
each transceiver).
[0090]
[0111]As shown in FIG. 16, the processor / processing circuitry 1602 may include a sound
detection module / circuit 1616, a position / orientation module / circuit 1618, a sound
processing module / circuit 1620, and a command module / circuit 1622.
[0091]
[0112]
The sound detection module / circuit 1616 may be for detecting and capturing sound.
In some implementations, the sound detection module / circuit 1616 may capture sound from
the microphone 1606. Position / orientation module / circuit 1618 may be for determining the
position and / or orientation of sound transducer array 1600 in some implementations. The
sound processing module / circuit 1620 may be for processing the sound captured by the
microphone 1606, calculating a sound projection pattern of the captured sound, and displaying a
graphical representation on physical space. The command module / circuit 1622 may be for
processing control information based on multi-touch commands (or gestures) to redirect the
sound fields of the array. Processing of the sound may include extracting an individual's sound
from the captured sound. The processing of the sound may also include, in some
implementations, identifying the identity of the person who is speaking.
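As a structural illustration of the modules / circuits named above (sound detection 1616, position / orientation 1618, sound processing 1620, command 1622), the following skeleton is a sketch with stubbed behavior; it is not an implementation of the disclosure.

```python
class SoundDetectionModule:
    def capture(self, microphones):
        return [m for m in microphones]            # stub: collect microphone frames

class PositionOrientationModule:
    def estimate(self):
        return {"origin": (0.0, 0.0), "orientation": (1.0, 0.0)}

class SoundProcessingModule:
    def project_pattern(self, frames, pose):
        # stub: compute a sound projection pattern for display in physical space
        return {"direction_deg": 0.0, "beam_width_deg": 30.0, "origin": pose["origin"]}

class CommandModule:
    def apply(self, pattern, command):
        # stub: process a multi-touch command (e.g., redirect the sound field)
        if command.get("type") == "redirect":
            pattern["direction_deg"] = command["azimuth_deg"]
        return pattern

pattern = SoundProcessingModule().project_pattern([], PositionOrientationModule().estimate())
print(CommandModule().apply(pattern, {"type": "redirect", "azimuth_deg": 40.0}))
```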
[0092]
[0113]
FIG. 17 illustrates an example of a device that some implementations may use. As shown in FIG. 17, the device 1700 includes at least one processor / processing circuit 1702, a memory 1704, a touch sensitive screen 1706, several input devices 1708, at least one transceiver 1710, at least one user interface module 1712, and at least one communication interface module 1714.
[0093]
[0114]
Touch sensitive screen 1706 may be used to display a graphical representation of the sound field
in physical space. Touch sensitive screen 1706 may also be used to receive input from one or
more users. Input device 1708 allows a user to input data and / or provide control of the device.
The transceiver 1710 may allow the device to transmit and receive wireless signals from other
devices (eg, phones, computers, tablets, sound transducer arrays). The device may include a
number of transceivers that allow the sound transducer array to communicate (eg, wirelessly)
with different devices using different communication links and different communication
protocols. In some implementations, user interface module 1712 provides an interface between
touch sensitive screen 1706, input device 1708, and processor / processing circuit 1702. The
user interface module 1712 may include several user interface modules (eg, modules for each
component). In some implementations, communication interface module 1714 provides an
interface between transceiver 1710 and processor / processing circuit 1702. Communication
interface module 1714 may include several interface modules (eg, modules for each transceiver).
[0094]
[0115]As shown in FIG. 17, the processor / processing circuit 1702 may include a sound detection module / circuit 1716 for interfacing with the sound transducer array, a position / orientation module / circuit 1718 for determining the position of the sound transducer array, a sound processing module / circuit 1720, and a command module / circuit 1722.
[0095]
[0116]
The sound detection module / circuit 1716 may be for interfacing with a sound transducer array.
The position / orientation module / circuit 1718 may be for determining the position and / or
orientation of the sound transducer array in some implementations. The sound processing
module / circuit 1720 may be for processing sound captured by the microphone in some
implementations. The microphone may be a microphone of a sound transducer array coupled to
the device. Processing of the sound may include extracting an individual's sound from the
captured sound. The processing of the sound may also include, in some implementations,
identifying the identity of the person who is speaking. The command module / circuit 1722 may
be for processing control information based on multi-touch gestures to redirect the sound field of
the array.
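The following sketch, with assumed names and an assumed message format, illustrates how the device-side command module / circuit 1722 might translate a multi-touch drag on the touch sensitive screen into control information for redirecting the array's sound field:

```python
import math

def gesture_to_control_info(touch_start, touch_end, screen_center):
    """Convert a one-finger drag (pixel coordinates) into a beam redirection command."""
    def angle(p):
        return math.degrees(math.atan2(p[1] - screen_center[1], p[0] - screen_center[0]))
    return {
        "command": "redirect_beam",
        "delta_deg": (angle(touch_end) - angle(touch_start)) % 360.0,
    }

def send_to_array(control_info):
    """Placeholder for transmitting the control information over the transceiver 1710."""
    print("sending to sound transducer array:", control_info)

if __name__ == "__main__":
    send_to_array(gesture_to_control_info((400, 300), (300, 400), (320, 240)))
```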
[0096]
[0117]
FIG. 18 illustrates a system for representing and controlling a sound field in physical space
utilizing one or more tablets, according to one embodiment. As shown in FIG. 18, the three
individuals 1800-1806 may each have tablets 1808-1812 that can communicate directly with
each other or with the hub 1814. Each tablet may have its own sound transducer array (i.e.,
microphones and speakers) 1809-1813 that may be located internally within each tablet or
external to each tablet.
[0097]
[0118]
FIG. 19A illustrates another configuration that may be implemented using additional devices. As
shown in FIG. 19A, sound transducer array 1900 is in communication with a number of mobile
devices 1902-1908 (eg, handsets, tablets). Each of these mobile devices may be associated with a
respective user / person 1910-1916. Mobile devices may be handsets, tablets, phones, smart
phones, portable electronic devices, electronic notepads, and / or personal digital assistants
(PDAs). Mobile devices may be able to communicate with other devices via cellular networks and
/ or other communication networks.
[0098]
[0119]
The mobile devices 1902-1908 may allow the user to "check in" and / or register with the sound transducer array 1900 (e.g., check in using NFC by tapping the mobile device near the microphone array 1900). However, different implementations may check in and / or register with the sound transducer array 1900 in different ways. For example, the mobile device may use another communication protocol or link (eg, Bluetooth, WiFi) to communicate with the sound transducer array 1900. Once the user / mobile device is "checked in" or registered, the mobile device may be tracked by the sound transducer array using ultrasound / infrared / sound pulses (or other known tags), which allows the position / location of the mobile device to be continually known to the sound transducer array 1900, and which in turn means that the sound transducer array 1900 knows the position / location of the user associated with the mobile device being tracked.
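A simplified sketch of this check-in and tracking flow is given below; the method names, the link labels, and the pulse handling are assumptions used only for illustration:

```python
class SoundTransducerArrayTracker:
    """Toy model of the array registering devices and tracking them by pulses/tags."""

    def __init__(self):
        self.registered = {}   # device_id -> last known (x, y) position, or None

    def check_in(self, device_id, link="NFC"):
        """Register a mobile device/user with the array (e.g., NFC tap, Bluetooth, WiFi)."""
        print(f"{device_id} checked in over {link}")
        self.registered[device_id] = None

    def on_tracking_pulse(self, device_id, position):
        """Update the device's position when a tracking pulse/tag is observed."""
        if device_id in self.registered:
            self.registered[device_id] = position

if __name__ == "__main__":
    tracker = SoundTransducerArrayTracker()
    tracker.check_in("tablet-1902", link="NFC")
    tracker.on_tracking_pulse("tablet-1902", position=(1.2, 0.4))
    print(tracker.registered)
```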
[0099]
[0120]
Each mobile device 1902-1908 may provide a graphical user interface on its respective screen that allows the user to specify the position and / or location of the user and / or device (eg, a tablet) relative to the sound transducer array 1900. That is, the user may indicate the user's location on the screen of the mobile device, which is then transmitted (eg, through Bluetooth, WiFi) to the sound transducer array 1900 and / or another device (eg, device 1001). The graphical user interface on the screen of the mobile device (e.g., mobile devices 1902-1908) may also provide / display text (e.g., transcribed captured audio). Such text may be provided / sent from the sound transducer array 1900 and / or from another device in communication with the sound transducer array 1900.
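The two exchanges described above (the device reporting the user's indicated position, and the array returning transcribed text for display) might look like the following sketch, assuming a simple JSON message format that is not specified in the text:

```python
import json

def position_message(user_id, x, y):
    """Built on the mobile device when the user marks a location on its screen."""
    return json.dumps({"type": "user_position", "user": user_id, "x": x, "y": y})

def transcription_message(user_id, text):
    """Built by the array (or a device connected to it) for display on the handset."""
    return json.dumps({"type": "transcription", "user": user_id, "text": text})

if __name__ == "__main__":
    # e.g., sent over Bluetooth/WiFi between a handset and the sound transducer array 1900
    print(position_message("user-1910", x=0.8, y=-0.3))
    print(transcription_message("user-1910", "Let's review the next slide."))
```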
[0100]
[0121]
The sound transducer array 1900 may be located on a table (not shown) or on a touch sensitive
screen (not shown) of devices included in the table. Similarly, mobile devices 1902-1908 may be
placed on the table or on touch sensitive screens of devices included in the table.
[0101]
[0122]
FIG. 19B illustrates another configuration that may be implemented using different devices. FIG. 19B is similar to FIG. 19A except that the sound transducer array 1900 is located on the touch sensitive screen 1922 of the device 1920 and that the position of the user is specified on the graphical user interface on the touch sensitive screen 1922 of the device 1920. As shown in FIG. 19B, mobile devices 1902-1908 (eg, handset, tablet) are in communication with sound
transducer array 1900 and / or device 1920 (eg, using Bluetooth, WiFi).
[0102]
[0123]
As further shown in FIG. 19B, the user can specify their position relative to the sound transducer
array 1900 by specifying the position / location of the graphical user interface elements. As
shown in FIG. 19B, four graphical user interface elements 1930-1936 displayed on the graphical
user interface are shown on screen 1922. Each graphical user interface element 1930-1936 may
be associated with a particular user and / or mobile device. The graphical user interface element
may include text or an image (eg, an ID, a name, a photo) identifying the user with which the user
interface element is associated. Different implementations may present graphical user interface
elements in different ways. In some implementations, graphical user interface elements are
presented as the user taps the screen and / or logs in. In some implementations, the graphical user interface element may be presented when the user checks in to and / or registers with the sound transducer array 1900 and / or the device 1920 using one of the exemplary methods described above in FIG. 19A (eg, checking in using NFC by tapping on the sound transducer array 1900 and / or the device 1920). Because mobile devices 1902-1908 are in communication with
sound transducer array 1900 and / or device 1920, mobile devices 1902-1908 may receive data
from one or both of sound transducer array 1900 and device 1920. Such data may be presented
/ displayed on the screen of mobile devices 1902-1908. Examples of data include, in some
implementations, transcribed text of captured speech.
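As an illustrative sketch only (all names are assumed), the association between graphical user interface elements 1930-1936 and particular users / mobile devices could be kept in a small registry, with each element's on-screen position standing in for that user's position relative to the array:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class GuiElement:
    user_id: str
    label: str                  # e.g., a name or a reference to a photo
    position: Tuple[int, int]   # pixel coordinates on the screen 1922

@dataclass
class UserRegistry:
    elements: Dict[str, GuiElement] = field(default_factory=dict)

    def on_check_in(self, user_id, label, position):
        """Present a GUI element when a user taps the screen, logs in, or checks in."""
        self.elements[user_id] = GuiElement(user_id, label, position)

    def move_element(self, user_id, new_position):
        """Reposition the element when the user drags it to reflect where they actually sit."""
        self.elements[user_id].position = new_position

if __name__ == "__main__":
    registry = UserRegistry()
    registry.on_check_in("user-1910", "Alice", (120, 600))
    registry.move_element("user-1910", (80, 640))
    print(registry.elements["user-1910"])
```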
[0103]
[0124]
In some implementations, device 1920 is a mobile device (eg, a tablet, handset). This may be
possible if the screen size of the mobile device is large enough for the sound transducer array
1900 to be placed on the screen of the mobile device. In such cases, the mobile device may
function as a central mobile device (eg, central tablet) in which the sound transducer array 1900
is disposed. FIG. 19C illustrates an example configuration that includes a central mobile device
(eg, a central tablet). As shown in FIG. 19C, mobile devices 1902-1908 (eg, handsets, tablets) are
in communication with sound transducer array 1900 and / or central mobile device 1940 (eg,
using Bluetooth, WiFi). Central mobile device 1940 includes touch sensitive screen 1942 on
which sound transducer array 1900 may be disposed. It should be noted that in some
implementations, mobile devices 1902-1908 may all function as central mobile devices.
[0104]
[0125]
The configuration of FIG. 19C may be similar to the configuration of FIG. 19B except that the device 1920 of FIG. 19B, which may be a surface table / surface tablet, is replaced with a mobile device 1940 (eg, a tablet, a smartphone) that may function as a central mobile device in communication with other mobile devices (eg, mobile devices 1902-1908). In some implementations, the operation of the configuration shown in FIG. 19C is similar to the operation of the configuration shown and described in FIGS. 19A-19B. That is, for example, in some implementations, the user may check in to, register with, and / or log in to the sound transducer array 1900 and / or the central mobile device 1940 using NFC or another communication protocol / link (eg, Bluetooth, WiFi).
[0105]
[0126]
The term "exemplary" as used herein is used in the sense of "providing an example, instance, or
illustration". Any implementation or aspect described herein as "exemplary" is not necessarily to
be construed as preferred or advantageous over other aspects of the disclosure. Similarly, the
term "aspect" does not require that all aspects of the present disclosure include the described
feature, advantage, or mode of operation. The term "coupled" is used herein to refer to direct or
indirect coupling between two objects. For example, if object A is in physical contact with object B and object B is in contact with object C, then objects A and C may still be considered coupled to one another even though they are not in direct physical contact with each other. For example, the substrate of a die may be coupled to a packaging substrate even if the substrate of the die is not in direct physical contact with the packaging substrate.
[0106]
[0127]
One or more of the components, steps, features, and / or functions illustrated in FIGS. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 and / or 19 may be combined and / or rearranged into a single component, step, feature, or function, or may be incorporated into several components, steps, or functions. Additional elements, components, steps, and / or functions may also be added without departing from the invention.
[0107]
[0128]
It should also be noted that these embodiments may be described as a process that is depicted as
a flowchart, flow diagram, structure diagram, or block diagram. Although the flowcharts may
describe the operations as a sequential process, many of the operations may be performed in
parallel or simultaneously. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or to the main function.
[0108]
[0129]
Additionally, a storage medium may represent one or more devices for storing data, including read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, and / or other machine-readable media for storing information. The terms "machine-readable medium" and "machine-readable storage medium" include, but are not limited to, portable or fixed storage devices, optical storage devices, wireless channels, and various other media capable of storing, containing, or carrying instruction(s) and / or data.
[0109]
[0130]
Furthermore, embodiments may be implemented by hardware, software, firmware, middleware,
microcode, or any combination thereof. When implemented in software, firmware, middleware or
microcode, the program code or code segments to perform the necessary tasks may be stored on
a machine readable medium such as a storage medium or other storage device(s). A processor may perform the necessary tasks. A code segment may represent a procedure, a function, a
subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any
combination of instructions, data structures, or program segments. A code segment may be
coupled to a hardware circuit or another code segment by passing and / or receiving information,
data, arguments, parameters, or memory content. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means, including memory sharing, message passing, token passing, network transmission, and the like.
[0110]
[0131]
The various illustrative blocks, modules, circuits (eg, processing circuits), elements, and / or components described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. The processor may also be implemented as a combination of computing components, eg, a combination of a DSP and a microprocessor, a number of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0111]
[0132]
The methods or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executable by a processor, or in a combination of both, in the form of a processing unit, programming instructions, or other directions, and may be contained within a single device or distributed across multiple devices. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
[0112]
[0133]
Those skilled in the art will further appreciate that the various illustrative logic blocks, modules, circuits, and algorithmic steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software,
various illustrative components, blocks, modules, circuits, and steps have been described above
generally in terms of their functionality. Whether such functionality is implemented as hardware
or software depends upon the particular application and design constraints imposed on the
overall system.
[0113]
[0134]
Various features of the invention described herein may be implemented in different systems
without departing from the invention. It should be noted that the foregoing aspects of the
disclosure are merely examples and should not be construed as limiting the present invention.
The description of the aspects of the present disclosure is intended to be illustrative rather than
limiting the scope of the claims. As such, the present teachings can be readily applied to other
types of devices, and many alternatives, modifications, and variations will be apparent to those
skilled in the art.