Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2013058896
Abstract: PROBLEM: When a plurality of users simultaneously view the same content, audio of different volume or sound quality is to be reproduced for each user. SOLUTION: The electronic device includes a plurality of oscillation devices 12 for outputting modulated waves of a parametric speaker, a display unit 40 for displaying image data, a recognition unit 30 for recognizing the positions of a plurality of users, and a control unit 20 that controls the oscillation devices 12 to reproduce audio data linked to the image data. The control unit 20 controls the oscillation devices 12 to reproduce the audio data toward the position of each user recognized by the recognition unit 30, with the volume or sound quality set for each user. [Selected figure] Figure 2
Electronic device
[0001]
The present invention relates to an electronic device provided with an oscillating device.
[0002]
Techniques concerning electronic devices provided with audio output means are described, for example, in Patent Documents 1 to 5.
The technology described in Patent Document 1 measures the distance between a portable terminal and the user, and controls the brightness of the display and the volume of the speaker on that basis. The technology described in Patent Document 2 determines whether an input audio signal is speech or non-speech using music-property detection means and speech-property detection means, and adjusts the output sound accordingly. The technology described in Patent Document 3 reproduces sound suitable for both hearing-impaired and hearing listeners by means of a speaker control device provided with a highly directional speaker and a normal speaker.
03-05-2019
1
[0003]
Patent Documents 4 and 5 describe techniques relating to parametric speakers. The technique
described in Patent Document 4 is to control the frequency of a carrier wave signal of a
parametric speaker in accordance with a demodulation distance. The technology described in
Patent Document 5 relates to a parametric audio system having a sufficiently high carrier
frequency.
[0004]
JP 2005-202208 A, JP 2010-231241 A, JP 2008-197381 A, JP 2006-81117 A, JP 2010-51039 A
[0005]
When a plurality of users view the same content at the same time, it is required to reproduce
voices of different volume or sound quality for each user.
[0006]
An object of the present invention is to reproduce voice of different volume or sound quality for
each user when a plurality of users view the same content simultaneously.
[0007]
According to the present invention, there is provided an electronic device comprising: a plurality of oscillation devices for outputting modulated waves of a parametric speaker; a display unit for displaying first image data; a recognition unit for recognizing the positions of a plurality of users; and a control unit configured to control the oscillation devices to reproduce audio data tied to the first image data, wherein the control unit controls the oscillation devices to reproduce the audio data toward the position of each of the users recognized by the recognition unit, with the volume or sound quality set for each of the users.
[0008]
According to the present invention, when a plurality of users view the same content at the same
time, voices of different volume or sound quality can be reproduced for each user.
[0009]
FIG. 1 is a schematic view showing an operation method of an electronic device according to a first embodiment.
FIG. 2 is a block diagram showing the electronic device shown in FIG. 1.
FIG. 3 is a plan view showing the parametric speaker shown in FIG. 2.
FIG. 4 is a cross-sectional view showing the oscillation device shown in FIG. 3.
FIG. 5 is a cross-sectional view showing the piezoelectric vibrator shown in FIG. 4.
FIG. 6 is a flowchart showing the operation method of the electronic device shown in FIG. 1.
FIG. 7 is a block diagram showing an electronic device according to a second embodiment.
[0010]
Hereinafter, embodiments of the present invention will be described with reference to the
drawings. In all the drawings, the same components are denoted by the same reference numerals,
and the description thereof will be appropriately omitted.
[0011]
FIG. 1 is a schematic view showing an operation method of the electronic device 100 according
to the first embodiment. FIG. 2 is a block diagram showing the electronic device 100 shown in FIG. 1. The electronic device 100 according to the present embodiment includes a parametric
speaker 10 having a plurality of oscillation devices 12, a display unit 40, a recognition unit 30,
and a control unit 20. The electronic device 100 is, for example, a television, a display device with digital signage, or a portable terminal device. Examples of the portable terminal device include a mobile telephone.
[0012]
The oscillation device 12 outputs an ultrasonic wave 16. The ultrasonic wave 16 is a modulated wave of
a parametric speaker. The display unit 40 displays image data. The recognition unit 30
recognizes the positions of a plurality of users. The control unit 20 controls the oscillation device
12 to reproduce the audio data associated with the image data displayed by the display unit 40.
The control unit 20 controls the oscillation device 12 to reproduce the audio data toward the position of each user recognized by the recognition unit 30, with the volume and sound quality set for that user. The configuration of the electronic device 100 will be described in detail below with reference to the drawings.
[0013]
As shown in FIG. 1, the electronic device 100 includes a housing 90. The parametric speaker 10,
the display unit 40, the recognition unit 30, and the control unit 20 are disposed, for example,
inside the housing 90 (not shown).
[0014]
The electronic device 100 receives or stores content data. The content data includes audio data
and image data. Image data of the content data is displayed by the display unit 40. Further, audio
data of the content data is associated with the image data and output by the plurality of
oscillation devices 12.
[0015]
As shown in FIG. 2, the recognition unit 30 includes an imaging unit 32 and a determination unit
34. The imaging unit 32 captures an area including a plurality of users to generate image data.
The determination unit 34 determines the position of each user by processing the image data
captured by the imaging unit 32. The position of each user is determined, for example, by storing in advance a feature quantity identifying each user and collating the feature quantities with the image data. The feature quantity includes, for example, the distance between the eyes, or the size and shape of a triangle connecting the eyes and the nose. The recognition unit 30 can also specify, for example,
the position of the user's ear. In addition, the recognition unit 30 may have a function of
automatically following the user and determining the position of the user when the user moves in
the area imaged by the imaging unit 32.
[0016]
As shown in FIG. 2, the electronic device 100 includes a distance calculation unit 50. The
distance calculation unit 50 calculates the distance between each user and the oscillation device
12. As shown in FIG. 2, the distance calculation unit 50 includes, for example, a sound wave
detection unit 51. In this case, the distance calculation unit 50 calculates the distance between
each user and the oscillation device 12 as follows, for example. First, a sensing ultrasonic wave is output from the oscillation device 12. Next, the sound wave detection unit 51 detects the sensing ultrasonic wave reflected from each user. Then, the distance between each user and the oscillation device 12 is calculated from the time between the output of the sensing ultrasonic wave by the oscillation device 12 and its detection by the sound wave detection unit
51. When the electronic device 100 is a mobile phone, the sound wave detection unit 51 can be
configured by, for example, a microphone.
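The time-of-flight computation just described can be sketched in a few lines. This is an illustrative sketch only; the function name and the fixed speed-of-sound constant are assumptions, not part of the patent:

```python
# Sketch of the distance calculation in paragraph [0016]:
# a sensing ultrasonic wave is emitted, its reflection is detected,
# and the one-way distance is half the round-trip time multiplied
# by the speed of sound. Names and constants are illustrative.

SPEED_OF_SOUND_M_PER_S = 343.0  # dry air at roughly 20 degrees C

def distance_to_user(round_trip_seconds: float) -> float:
    """Distance from the oscillation device to the user, in meters."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_seconds / 2.0

# A 10 ms round trip corresponds to about 1.7 m.
print(round(distance_to_user(0.010), 3))  # 1.715
```

A real device would also need to reject echoes from objects other than the user, which the recognition unit's position estimate could help with.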
[0017]
As shown in FIG. 2, the electronic device 100 includes a setting terminal 52. The setting terminal
52 sets, for each user, the volume or sound quality of the audio data associated with the image
data displayed by the display unit 40, for example. The setting of the volume or sound quality by
the setting terminal 52 is performed by each user, for example. As a result, audio with the volume and sound quality optimal for each user can be set individually. The setting terminal 52 is incorporated, for example, inside the housing 90. Alternatively, the setting terminal 52 may be provided outside the housing 90; in this case, a plurality of setting terminals 52 can be provided, one for each user.
[0018]
As shown in FIG. 2, the control unit 20 is connected to the plurality of oscillation devices 12, the
recognition unit 30, the display unit 40, the distance calculation unit 50, and the setting terminal
52. The control unit 20 controls the oscillation device 12 to reproduce audio data toward the position of each user, with the volume and sound quality set for that user. The volume of
audio data to be reproduced is controlled, for example, by adjusting the output of audio data.
Further, the sound quality of the reproduced audio data is controlled, for example, by changing
the setting of an equalizer for processing the audio data before modulation. The control unit 20
may be configured to control only one of the volume and the sound quality.
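The two controls described in this paragraph — output scaling for volume and pre-modulation equalization for sound quality — might be sketched as below. The one-band "equalizer" and all names are deliberately minimal illustrative assumptions, not the patent's implementation:

```python
# Sketch of paragraph [0018]: volume is controlled by scaling the
# audio output, and sound quality by processing (equalizing) the
# audio before modulation. The crude running-average "bass" term
# stands in for a real equalizer and is illustrative only.

def apply_user_setting(samples, volume, bass_gain):
    """Scale samples by volume; emphasize low frequencies by adding
    a running-average (low-pass) component weighted by bass_gain."""
    out, prev = [], 0.0
    for s in samples:
        low = 0.5 * (s + prev)  # crude low-frequency component
        out.append(volume * (s + bass_gain * low))
        prev = s
    return out

print(apply_user_setting([1.0, 0.0], volume=0.5, bass_gain=0.0))  # [0.5, 0.0]
```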
[0019]
The control of the oscillation device 12 by the control unit 20 is performed, for example, as follows. First, the feature quantity of each user is registered in association with an ID. Next, the volume and sound quality set for each user are stored in association with that user's ID. Then, the ID corresponding to a specific volume and sound quality setting is selected, and the feature quantity associated with the selected ID is read out. The user having the read feature quantity is then located by processing the image data generated by the imaging unit 32, and the audio corresponding to the selected setting is reproduced toward that user. When the position of the user's ear is specified by the recognition unit 30, the control unit 20 can also control the oscillation device 12 to output the ultrasonic wave 16 toward the position of the user's ear.
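The ID-based lookup in the preceding paragraph can be sketched as two registries keyed by user ID. The IDs, feature strings, and setting values are all illustrative assumptions:

```python
# Sketch of the per-user lookup in paragraph [0019]: each user ID
# maps to a stored feature quantity and to the volume/sound-quality
# setting registered for that user. All values are illustrative.

features_by_id = {"user1": "eye_distance=62px", "user2": "eye_distance=58px"}
settings_by_id = {"user1": {"volume": 0.8, "equalizer": "bass_boost"},
                  "user2": {"volume": 0.3, "equalizer": "flat"}}

def setting_for_detected_feature(feature: str):
    """Find the user whose registered feature matches, return the setting."""
    for user_id, stored in features_by_id.items():
        if stored == feature:
            return settings_by_id[user_id]
    return None  # unregistered user

print(setting_for_detected_feature("eye_distance=58px"))  # user2's setting
```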
[0020]
The control unit 20 adjusts the volume and sound quality of the audio data reproduced for each user based on the distance between the user and the oscillation device 12 calculated by the distance calculation unit 50. That is, the control unit 20 controls the oscillation device 12 to reproduce audio data at the position of each user, with the volume or sound quality set for that user, based on the distance between that user and the oscillation device 12. For example, the volume of the reproduced audio data is adjusted by controlling the output of the audio data based on the distance between each user and the oscillation device 12. Audio data can thus be reproduced for each user at the appropriate volume set by that user. Likewise, the sound quality of the reproduced audio data is adjusted, for example, by processing the audio data before modulation based on the distance between each user and the oscillation device 12. Audio data can thus be reproduced for each user with the appropriate sound quality set by that user.
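The patent gives no formula for the distance-based adjustment. A minimal sketch, assuming simple free-field 1/r attenuation (about -6 dB per doubling of distance), is shown below; a real parametric array behaves differently, so the model and names are assumptions:

```python
# Sketch of distance-based volume compensation (paragraph [0020]):
# scale the output amplitude in proportion to the user's distance so
# that the per-user target level is reached at the user's position.
# The 1/r free-field assumption is illustrative, not from the patent.

REFERENCE_DISTANCE_M = 1.0  # distance at which target_level is defined

def output_gain(target_level: float, distance_m: float) -> float:
    """Output amplitude needed to reach target_level at distance_m."""
    return target_level * (distance_m / REFERENCE_DISTANCE_M)

# Doubling the distance doubles the required output amplitude.
print(output_gain(0.5, 2.0))  # 1.0
```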
[0021]
FIG. 3 is a plan view showing the parametric speaker 10 shown in FIG. The parametric speaker
10 is configured, for example, by arranging a plurality of oscillating devices 12 in an array, as
shown in FIG.
[0022]
FIG. 4 is a cross-sectional view showing the oscillation device 12 shown in FIG. The oscillation
device 12 includes a piezoelectric vibrator 60, a vibrating member 62, and a support member 64.
The piezoelectric vibrator 60 is provided on one surface of the vibrating member 62. The
support member 64 supports the edge of the vibrating member 62.
[0023]
The control unit 20 is connected to the piezoelectric vibrator 60 via the signal generation unit
22. The signal generation unit 22 generates an electrical signal to be input to the piezoelectric
vibrator 60. The control unit 20 controls the signal generation unit 22 based on the information
input from the outside, thereby controlling the oscillation of the oscillation device 12. The control
unit 20 inputs a modulation signal for the parametric speaker to the oscillation device 12 via the signal generation unit 22. At this time, the piezoelectric vibrator 60 uses a sound wave of 20 kHz or more, for example 100 kHz, as the carrier wave of the signal.
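Generating such a modulation signal can be sketched as amplitude modulation of a 100 kHz carrier by the audio signal. The sample rate, modulation index, and test tone below are illustrative assumptions, and real parametric-speaker drivers use more sophisticated modulation schemes:

```python
import math

# Sketch of the AM modulation mentioned in paragraph [0023]: the
# audible signal modulates the amplitude of an ultrasonic carrier
# (e.g. 100 kHz). Sample rate and tone frequency are illustrative.

CARRIER_HZ = 100_000
SAMPLE_RATE = 1_000_000  # must exceed twice the carrier frequency
MOD_INDEX = 0.8          # modulation depth, kept below 1 to avoid overmodulation

def am_modulate(audio_samples):
    """Return AM-modulated carrier samples for audio values in [-1, 1]."""
    out = []
    for n, a in enumerate(audio_samples):
        t = n / SAMPLE_RATE
        carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
        out.append((1.0 + MOD_INDEX * a) * carrier)
    return out

# A 1 kHz test tone, 1 ms long:
tone = [math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(1000)]
modulated = am_modulate(tone)
print(len(modulated))  # 1000
```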
[0024]
FIG. 5 is a cross-sectional view showing the piezoelectric vibrator 60 shown in FIG. 4. As shown in FIG. 5, the piezoelectric vibrator 60 includes a piezoelectric body 70, an upper electrode 72 and a
lower electrode 74. The piezoelectric vibrator 60 is, for example, circular or elliptical in plan
view. The piezoelectric body 70 is sandwiched between the upper electrode 72 and the lower
electrode 74. Further, the piezoelectric body 70 is polarized in the thickness direction. The
piezoelectric body 70 is made of a material having a piezoelectric effect, and is made of, for
example, lead zirconate titanate (PZT) or barium titanate (BaTiO3), which is a material having
high electromechanical conversion efficiency. The thickness of the piezoelectric body 70 is
preferably 10 μm to 1 mm. The piezoelectric body 70 is made of a brittle material. Therefore, if
the thickness is less than 10 μm, breakage or the like is likely to occur during handling. On the
other hand, when the thickness exceeds 1 mm, the electric field strength of the piezoelectric
body 70 is reduced. This leads to a decrease in energy conversion efficiency.
[0025]
The upper electrode 72 and the lower electrode 74 are made of a material having electrical
conductivity, such as silver or silver / palladium alloy. Silver is a general purpose material with
low resistance, and is advantageous in terms of manufacturing cost and manufacturing process.
In addition, a silver / palladium alloy is a low resistance material excellent in oxidation resistance
and excellent in reliability. The thickness of the upper electrode 72 and the lower electrode 74 is
preferably 1 μm to 50 μm. If the thickness is less than 1 μm, uniform molding becomes
difficult. On the other hand, if it exceeds 50 μm, the upper electrode 72 or the lower electrode
74 becomes a constraining surface with respect to the piezoelectric body 70, resulting in a
decrease in energy conversion efficiency.
[0026]
The vibrating member 62 is made of a material, such as metal or resin, that has a high elastic modulus compared with brittle ceramic. Examples of general-purpose materials for the vibrating member 62 include phosphor bronze and stainless steel. The thickness of the vibrating member 62 is preferably 5 μm to 500
μm. The longitudinal elastic modulus of the vibrating member 62 is preferably 1 GPa to 500
GPa. If the longitudinal elastic modulus of the vibrating member 62 is excessively low or high,
the characteristics and reliability as a mechanical vibrator may be impaired.
[0027]
In the present embodiment, sound is reproduced using the operating principle of a parametric speaker, which is as follows. Ultrasonic waves subjected to AM, DSB, SSB, or FM modulation are emitted into the air, and an audible sound appears owing to the nonlinear characteristics that arise when the ultrasonic waves propagate through the air. The term "nonlinear" here refers to the transition from laminar to turbulent flow that occurs when the Reynolds number, the ratio of the inertial action of a flow to its viscous action, increases. Because a sound wave is minutely disturbed within the fluid, it propagates nonlinearly. In particular, when ultrasonic waves are emitted into the air, harmonics associated with this nonlinearity are generated significantly. A sound wave is a succession of compressions and rarefactions in which groups of air molecules alternate in density. When air molecules take longer to recover than to be compressed, air that cannot recover after compression collides with the continuously propagating air molecules, producing shock waves from which an audible sound arises. A parametric speaker can form a sound field only around the user and is therefore excellent in terms of privacy protection.
[0028]
Next, the operation of the electronic device 100 according to the present embodiment will be
described. FIG. 6 is a flow chart showing an operation method of the electronic device 100
shown in FIG. First, for each user, the volume and sound quality of the audio data linked to the
image data displayed by the display unit 40 are set (S01). Next, the display unit 40 displays the
image data (S02).
[0029]
Next, the recognition unit 30 recognizes positions of a plurality of users (S03). Next, the distance
calculation unit 50 calculates the distance between each user and the oscillation device 12 (S04).
Next, based on the distance between each user and the oscillation device 12, the volume and
sound quality of the audio data reproduced for each user are adjusted (S05).
[0030]
Next, the audio data linked to the image data displayed by the display unit 40 is reproduced toward the position of each user, with the volume or sound quality set for that user (S06). When the recognition unit 30 recognizes the position of a user by tracking, the control unit 20 may control, as needed, the direction in which the oscillation device 12 reproduces the audio data, based on the position of the user recognized by the recognition unit 30.
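The flow of steps S01 to S06 can be summarized as a short sketch. Every helper below stands in for one of the hardware units described in the text, and all names and data are illustrative placeholders:

```python
# Sketch of the operation flow of FIG. 6 (steps S03-S06); S01/S02
# (setting and display) are assumed to have happened already.
# The callables stand in for the recognition unit, distance
# calculation unit, and oscillation device control.

def run_playback(users, settings, recognize, measure_distance, reproduce):
    # S03: recognize each user's position
    positions = {u: recognize(u) for u in users}
    # S04: distance between each user and the oscillation device
    distances = {u: measure_distance(positions[u]) for u in users}
    # S05/S06: reproduce toward each user with that user's setting,
    # adjusted for the measured distance
    return {u: reproduce(positions[u], distances[u], settings[u]) for u in users}

result = run_playback(
    users=["A", "B"],
    settings={"A": {"volume": 0.9}, "B": {"volume": 0.2}},
    recognize=lambda u: (0, 0),
    measure_distance=lambda pos: 1.5,
    reproduce=lambda pos, d, s: f"volume={s['volume']} at {d} m",
)
print(result["B"])  # volume=0.2 at 1.5 m
```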
[0031]
Next, the effects of the present embodiment will be described. According to the present invention, the oscillation device outputs the modulated wave of a parametric speaker. Further, the control unit controls the oscillation device to reproduce the audio data linked to the image data displayed by the display unit toward the position of each user, with the volume or sound quality set for that user. With this configuration, the highly directional parametric speaker reproduces, for each user, audio data at the volume or sound quality set for that user. Therefore, when a plurality of users view the same content at the same time, audio of different volume or sound quality can be reproduced for each user.
[0032]
FIG. 7 is a block diagram showing the electronic device 102 according to the second
embodiment, which corresponds to FIG. 2 according to the first embodiment. The electronic
device 102 according to the present embodiment is the same as the electronic device 100
according to the first embodiment except that a plurality of detection terminals 54 are provided.
[0033]
The plurality of detection terminals 54 are held by the plurality of users, one terminal per user. The recognition unit 30 recognizes the position of each user by recognizing the position of that user's detection terminal 54. The recognition unit 30 recognizes the position of a detection terminal 54, for example, by receiving a radio wave emitted from the detection terminal 54. In addition, the recognition unit 30 may have a function of automatically tracking a user to determine the user's position when the user holding the detection terminal 54 moves. When a plurality of setting terminals 52 are provided, one for each user, each detection terminal 54 may be formed integrally with a setting terminal 52 and may have a function for selecting the volume and sound quality of the audio data reproduced for that user.
[0034]
In addition, the recognition unit 30 may include the imaging unit 32 and the determination unit 34. The imaging unit 32 generates image data by imaging an area including the user, and the determination unit 34 processes that image data so that details such as the position of the user's ear can be identified. The position of the user can therefore be recognized more accurately by using the imaging-based position detection together with the detection terminal 54.
[0035]
In the present embodiment, the control of the oscillation device 12 by the control unit 20 is performed as follows. First, the ID of each detection terminal 54 is registered in advance. Next, the volume and sound quality set by each user are associated with the ID of the detection terminal 54 held by that user. Each detection terminal 54 then transmits its ID, and the recognition unit 30 recognizes the position of the detection terminal 54 from the direction in which the ID was transmitted. Finally, the audio data is reproduced, with the corresponding volume and sound quality setting, toward the user holding the detection terminal 54 whose ID is associated with that setting.
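The terminal-ID scheme above can be sketched as a registry keyed by terminal ID. The ID strings and setting values are illustrative assumptions:

```python
# Sketch of the detection-terminal scheme in paragraph [0035]:
# terminal IDs are registered in advance, each user's setting is
# associated with the ID of the terminal that user holds, and a
# received ID selects the setting to reproduce toward that user.

registered_ids = {"T-001", "T-002"}
settings_by_terminal = {"T-001": {"volume": 0.7}, "T-002": {"volume": 0.4}}

def setting_for_received_id(terminal_id: str):
    """Return the per-user setting for a terminal ID received by radio."""
    if terminal_id not in registered_ids:
        return None  # unknown terminal: ignore
    return settings_by_terminal[terminal_id]

print(setting_for_received_id("T-002"))  # {'volume': 0.4}
```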
[0036]
Also in this embodiment, the same effect as that of the first embodiment can be obtained.
[0037]
Although the embodiments of the present invention have been described above with reference to
the drawings, these are merely examples of the present invention, and various configurations
other than the above can also be adopted.
[0038]
DESCRIPTION OF SYMBOLS: 10 parametric speaker, 12 oscillation device, 16 ultrasonic wave, 20 control unit, 22 signal generation unit, 30 recognition unit, 32 imaging unit, 34 determination unit, 40 display unit, 50 distance calculation unit, 51 sound wave detection unit, 52 setting terminal, 54 detection terminal, 60 piezoelectric vibrator, 62 vibrating member, 64 support member, 70 piezoelectric body, 72 upper electrode, 74 lower electrode, 90 housing, 100 electronic device, 102 electronic device