JP2000201393

Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2000201393
[0001]
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a listening device that localizes sound images outside the head, in front, behind, left, and right of the listener, including the rear, by means of a headphone device.
[0002]
2. Description of the Related Art Voice, or audio accompanying video, is recorded on the premise that it will be reproduced by speakers placed on both sides of the listener, so-called 2-channel stereo. When such an audio signal, or the audio of a video, computer game, or the like, is heard through a conventional headphone device, the left and right positions are recognized correctly, but the front-rear sound image positions are lost, and the result is heard as an unnatural sound image.
[0003]
A headphone device that attempts to localize these sound images and their front-rear relationship has required an enormous amount of signal processing, such as filtering, phase processing, cross-fading, and level processing, which has been a problem.
[0004]
SUMMARY OF THE INVENTION The present invention solves the above-mentioned problems,
and has the object of establishing sound image localization in the front, rear, left, and right directions and of providing a sound field that can be felt as real.
[0005]
In order to solve the above-mentioned problems, the object is achieved by arranging speakers 1 and 2, driven by acoustic signals, in front of and behind the ear as sound sources.
[0006]
By reexamining facts that are already well known and published, the underlying principle is extended, and it is shown that front and rear sound images can be localized.
[0007]
On the basis of principles published prior to this invention, the theory is developed below to explain how out-of-head sound localization arises (see FIG. 1).
[0008]
The sound wave A emitted from the virtual sound source O travels through space and reaches the front and the back of the ear. The wave arriving at the front passes from the ear canal to the tympanic membrane, where the brain perceives it as audible sound.
The wave arriving at the back passes through the temporal bone, and part of it diffracts around the ear to the front side, where it is perceived in the same way.
[0009]
The arriving sound wave has a different transfer function depending on whether its path runs in front of or behind the ear.
Therefore, the transfer functions and signals in front of and behind the ear are defined as follows:
Fα = transfer function when the acoustic signal from O passes in front of the ear
Rα = transfer function when the acoustic signal from O passes behind the ear
Fex1 = sound heard via the front of the ear
Rex1 = sound heard via the back of the ear
These are then expressed as Fex1 = Fα·A and Rex1 = Rα·A.
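The product relations above (Fex1 = Fα·A, Rex1 = Rα·A) can be read as linear time-invariant filtering: multiplying transfer functions in the frequency domain corresponds to convolving impulse responses in the time domain. The following is a minimal Python sketch of that correspondence; the numeric impulse responses are illustrative stand-ins, not values from this patent.

```python
import numpy as np

# Illustrative (assumed) signals: A is the wave emitted by the virtual
# source O, F_alpha is the impulse response of the path in front of the ear.
A = np.array([1.0, 0.5, 0.25])   # sound wave emitted by virtual source O
F_alpha = np.array([0.8, 0.1])   # front-of-ear path impulse response

# Fex1 = F_alpha * A: product of transfer functions in the frequency domain
# is convolution of impulse responses in the time domain.
Fex1 = np.convolve(F_alpha, A)   # sound heard via the front path
print(Fex1)
```

The rear-path relation Rex1 = Rα·A is computed identically with the rear-path impulse response.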
[0010]
Based on the preceding paragraph, consider the case of listening with headphones as shown in FIG. 1. There is also a transfer function along the path by which the ear hears the audio signal emitted by the headphones; this is defined as the in-head transfer function:
FM = in-head transfer function from the front of the ear
RM = in-head transfer function from the back of the ear
Fβ = sounding signal of the headphone speaker in front of the ear
Rβ = sounding signal of the headphone speaker behind the ear
Fex2 = sound heard via the front path, Rex2 = sound heard via the back path
so that Fex2 = Fβ·FM and Rex2 = Rβ·RM. In order for the sound signal from the headphones to be heard as sound from the source O, the sound source can be localized if Fex2 = Fex1 and Rex2 = Rex1.
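The matching condition Fex2 = Fex1 means the front headphone drive signal Fβ must satisfy Fβ·FM = Fex1. One way to solve for Fβ, assumed here for illustration and not a method stated in the patent, is division in the frequency domain (deconvolution). The sketch below uses short illustrative FIR responses chosen so the inversion is exact; a real in-head response FM could have spectral zeros that make direct inversion ill-conditioned.

```python
import numpy as np

# Assumed, illustrative impulse responses (not from the patent).
FM = np.array([1.0, 0.3])                  # in-head path, front speaker to eardrum
F_beta_true = np.array([0.8, 0.26, 0.17])  # drive signal we will try to recover
Fex1 = np.convolve(F_beta_true, FM)        # target sound at the eardrum

# Solve F_beta * FM = Fex1 by spectral division. The FFT length n covers the
# full linear convolution, so circular and linear convolution coincide here.
n = len(Fex1)
spectrum = np.fft.fft(Fex1, n) / np.fft.fft(FM, n)
F_beta = np.real(np.fft.ifft(spectrum))[: len(F_beta_true)]

assert np.allclose(F_beta, F_beta_true)    # the required drive signal is recovered
```

The rear-path condition Rex2 = Rex1 yields Rβ from RM in the same way.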
[0011]
The sounds Fex1 and Rex1 heard via the front and back of the ear can also be considered as follows. The process by which the sound wave emitted from the virtual sound source O comes to be heard is divided into a first half and a second half, defined as follows:
Fr1 = transfer function from the virtual sound source O to the front of the ear
Rr1 = transfer function from the virtual sound source O to the back of the ear
FrU = acoustic signal in front of the ear
RrU = acoustic signal behind the ear
so that FrU = Fr1·A and RrU = Rr1·A. The second half uses the in-head transfer functions defined in the previous paragraph, FM (from the front of the ear) and RM (from the back of the ear), giving Fex1 = FrU·FM and Rex1 = RrU·RM.
[0012]
From the above, substituting Fex2 = Fex1 and Rex2 = Rex1 gives Fβ·FM = FrU·FM and Rβ·RM = RrU·RM. Dividing each equation by its common factor, this can be expressed as Fβ = FrU and Rβ = RrU. That is, the acoustic signals to be emitted by the speakers in front of and behind the ear are the same as the acoustic signals received from O in front of and behind the ear. When acoustic signals satisfying these relations are listened to using the headphones, front and rear out-of-head sound image localization can be established.
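The conclusion Fβ = FrU (and symmetrically Rβ = RrU) can be checked end to end numerically: if the speaker in front of the ear simply replays the signal that arrived at that point from O, the sound reaching the eardrum is identical to natural listening. A sketch with illustrative impulse responses, which are assumptions and not values from the patent:

```python
import numpy as np

# Assumed, illustrative signals and impulse responses.
A = np.array([1.0, 0.5, 0.25])   # wave emitted by virtual source O
Fr1 = np.array([0.7, 0.2])       # path from O to a point in front of the ear
FM = np.array([1.0, 0.3])        # in-head path from that point to the eardrum

FrU = np.convolve(Fr1, A)        # signal arriving in front of the ear
Fex1 = np.convolve(FrU, FM)      # natural listening: O -> ear -> eardrum

F_beta = FrU                     # headphone drive signal per the derivation
Fex2 = np.convolve(F_beta, FM)   # headphone listening: speaker -> eardrum

assert np.allclose(Fex2, Fex1)   # the two listening conditions coincide
```

No filtering, phase, or cross-fade processing is applied anywhere in the chain, which is the point of the derivation.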
[0013]
EXAMPLES Examples of the present invention will be described with reference to FIGS. 1, 2, and 3. FIG. 1 shows an out-of-head sound localization method using headphones, in which the speakers serving as sound sources for the acoustic signals (front speaker 1 and rear speaker 2) are arranged in front of and behind the ear, localizing the sound image directly before and behind it. FIG. 2 is an enlarged cutaway view, seen from above, of the human head and one side of the headphone: front speaker 1 and rear speaker 2 are placed at the front and back of the ear and covered by ear pad 4 and cover 3. FIG. 3 shows an example of the overall configuration, comprising, from the left in the figure, a microphone group for sound collection, a recorded-signal recording unit (recording), a recorded-signal reproduction unit (reproduction), and a reproduced-signal listening unit. The microphone group, the recording unit, and the reproduction unit form a simple, so-called 3D sound recording method. The out-of-head acoustic signals picked up separately by the four microphones McFl, McFr, McRl, and McRr are collected (recorded) in the out-of-head acoustic signal storage unit Rc. The signals are reproduced by the out-of-head sound reproduction unit Rp. In the reproduced-signal listening section, the headphones of FIG. 2 are placed on both ears to listen to the reproduced signals. This device enables reliable, simple, and inexpensive out-of-head sound localization and listening with headphones, without applying complex signal processing such as cross-fading to the stored out-of-head acoustic signals.
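The signal chain of FIG. 3 amounts to routing each of the four recorded channels, unprocessed, to the matching headphone speaker. The channel-to-speaker mapping below is an assumption inferred from the microphone names (F/R for front/rear, l/r for left/right); the patent does not spell it out.

```python
# Assumed mapping from recorded channel to playback speaker, inferred from
# the channel names McFl, McFr, McRl, McRr in FIG. 3.
CHANNEL_TO_SPEAKER = {
    "McFl": "left front speaker (1)",
    "McFr": "right front speaker (1)",
    "McRl": "left rear speaker (2)",
    "McRr": "right rear speaker (2)",
}

def route(recorded: dict) -> dict:
    """Pass each recorded channel straight to its speaker, with no filtering,
    cross-fading, or other processing (the simplification the patent claims)."""
    return {CHANNEL_TO_SPEAKER[ch]: samples for ch, samples in recorded.items()}

# Illustrative sample values only.
playback = route({"McFl": [0.1, 0.2], "McFr": [0.0, 0.1],
                  "McRl": [0.3, 0.0], "McRr": [0.2, 0.2]})
```

Because the stored signals are replayed unchanged, the storage unit Rc and reproduction unit Rp need no signal-processing hardware beyond recording and playback.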
[0014]
Industrial Applicability The present invention provides a rich acoustic environment even in a narrow space, for applications such as 3D music appreciation with DVDs and other media, which will continue to increase, and PC games with a realistic 3D sound field.