JP2008209703

DESCRIPTION JP2008209703
The present invention provides a karaoke apparatus 1 capable of detecting the microphone position, tracking the singing position of a singer 6, and emitting a directional beam 6a of the singing voice toward the singer 6. SOLUTION: The karaoke apparatus 1 emits from a speaker array 3 the singing voice of the singer collected by a microphone 2 together with an accompaniment sound 81. Furthermore, the karaoke apparatus 1 emits a measurement sound 83 from the speakers SP1 and SPn at both ends of the speaker array 3 simultaneously with, or immediately after, the sounding of a masker. The karaoke apparatus 1 detects the microphone position from the time elapsed between the emission of the measurement sound 83 and its collection by the microphone 2. The measurement sound 83 is generated as a harmonic of the fundamental frequency of the masker. By emitting the measurement sound 83 periodically, the karaoke apparatus 1 tracks the microphone position. In this way, by tracking the microphone position, the karaoke apparatus 1 can continue to emit the directional beam 6a of the singing voice toward the singer 6. [Selected figure] Figure 5
Karaoke device
[0001]
The present invention relates to a karaoke apparatus capable of controlling the directivity of
singing voices.
[0002]
Conventional karaoke apparatuses are often installed either in a place occupied by a single group, such as a room of a karaoke box, or in a place where unspecified customers gather, such as a restaurant or snack bar.
[0003]
Conventional karaoke apparatuses use stereo speakers to spread the accompaniment sound and the singing voice throughout the installation site.
In this case, when the apparatus is installed in a store where unspecified customers gather, everyone in the store can hear the singing voice of whoever is singing.
In a restaurant such as a snack bar, the singing of another group is not necessarily something one wants to hear, and in some cases it may even be annoying. To solve this, for example, a directional speaker directed at the singer has been installed so that the singing voice is emitted toward the singer and toward a group specified in advance, while a guide vocal is emitted toward everyone else (see Patent Document 1).
[0004]
In the karaoke apparatus of Patent Document 1, the singing voice is emitted with directivity toward a group preset by the user's operation input and toward the specified singing position (near the monitor of the karaoke apparatus), while the guide vocal is emitted with directivity toward the others. JP 2005-173137 A
[0005]
However, the invention of Patent Document 1 has the problem that whenever the singer changes position while singing, someone present has to specify the sound emission direction of the singing voice.
[0006]
The present invention therefore aims to provide a karaoke apparatus that tracks the singer's singing position in order to emit the singing voice toward the singer.
[0007]
The invention according to claim 1 comprises: sound collecting means for collecting, with a microphone, voices from the surroundings including the singing voice of a singer and generating a voice signal; sound emission means for emitting, from two speakers of a speaker array having a plurality of speakers, a measurement sound composed of harmonics of the fundamental frequency of a masker simultaneously with, or immediately after, the sounding of the masker; and microphone position detecting means for detecting the microphone position based on the elapsed time from the emission of the measurement sound by the sound emission means until its collection by the sound collection means; wherein the sound emitting means emits a directional beam including an emitted sound toward the microphone position detected by the microphone position detecting means.
[0008]
In this configuration, the karaoke apparatus emits measurement sounds from two speakers in the
speaker array simultaneously with or immediately after the masker's sound generation.
The karaoke apparatus detects the position of the microphone from the elapsed time until the
microphone collects the measurement sound emitted from the two speakers in the speaker array.
At this time, the measurement sound is composed of harmonics of the fundamental frequency of
the masker.
In addition, the karaoke apparatus emits a directional beam including singing voice toward the
detected microphone position (singer). Thereby, the karaoke apparatus can know the singing
position of the singer and can emit the directional beam including the singing voice to the singer.
Also, because the measurement sound is composed of harmonics of the fundamental frequency of the masker, it is masked by the masker. For this reason, the karaoke apparatus can emit the measurement sound without it being perceived by anyone, determine the singing position of the singer, and emit the directional beam including the singing voice toward the singer.
Further, the karaoke apparatus can track the singer by emitting the measurement sound at
predetermined intervals. Thus, the karaoke apparatus can emit a directional beam including a
singing voice to the singer even if the singer moves.
[0009]
The invention according to claim 2 is characterized in that the sound emitting means emits a measurement sound included in advance in the data of the karaoke song, using one or a plurality of instrument sounds constituting the accompaniment sound of the karaoke song as the masker.
[0010]
In this configuration, one or more instrument sounds from the accompaniment sound of the karaoke song are used as maskers, and measurement sounds included in advance in the karaoke song are emitted.
As a result, by using an instrumental sound regularly played during karaoke performance as a
masker, it is possible to periodically emit a measurement sound simultaneously with the
sounding of the masker. Further, in the case where there are a plurality of musical instrument
sounds serving as maskers, the number of measurement sounds can be increased, and the
measurement sounds can be emitted more regularly.
[0011]
The invention according to claim 3 is characterized in that the sound emitting means generates and emits the measurement sound, using the musical instrument sound as the masker, at each sounding timing of one or a plurality of musical instrument sounds constituting the accompaniment sound of the karaoke song.
[0012]
In this configuration, the karaoke apparatus analyzes the accompaniment sound of the karaoke
song, determines the instrument sound to be the masker, and generates and emits the
measurement sound at the timing of the masker's sound emission.
As a result, even if the karaoke song does not include the measurement sound in advance, the
karaoke apparatus can automatically generate the measurement sound and emit it.
[0013]
The invention of claim 4 is characterized in that the sound emitting means detects an increase in
the sound pressure level of the singing voice and generates and emits the measurement sound
using the singing voice as a masker.
[0014]
In this configuration, the karaoke apparatus detects an increase in the sound pressure level of the
singing voice that is a masker, generates a measurement sound, and emits the sound.
As a result, even when the accompaniment sound is not included in the karaoke song, as in an a cappella performance, the measurement sound can be generated.
[0015]
According to the present invention, by composing the measurement sound of harmonics of the fundamental frequency of the masker, the karaoke apparatus can emit the measurement sound from the two speakers of the speaker array and pick it up with the microphone without it being perceived by anyone. Since the karaoke apparatus can thereby detect the position of the microphone, it can determine the singing position of the singer and emit the directional beam including the singing voice toward the singer. Furthermore, by emitting the measurement sound repeatedly, the singing position of the singer can be tracked, so that the directional beam including the singing voice can be emitted toward the singer even when the singer moves.
[0016]
First Embodiment A karaoke apparatus according to an embodiment of the present invention will
be described with reference to FIGS. FIG. 1 is a view for explaining the inside of a restaurant. FIG.
1A shows a singer singing in front of the monitor. FIG. 1 (B) shows a singer singing in front of his
/ her group. FIG. 2 is an explanatory view of a microphone position detection method.
[0017]
As shown in FIG. 1(A), the karaoke apparatus 1 is installed in a store 5 such as a restaurant. The karaoke apparatus 1 has a microphone 2, a speaker array 3, and a monitor 4. Tables 7 (7a to 7d) are arranged in the store 5, and customers are seated at the respective tables 7a to 7d. The singer 6, a customer at table 7a, sings using the karaoke apparatus 1. To simplify the explanation, the present embodiment describes the case where the singing voice is emitted toward the singer 6 and the table 7a at which the singer is seated, while the other tables 7b to 7d do not hear the singing voice but hear the guide vocal instead.
[0018]
When the singer 6 sings, the karaoke apparatus 1 generates a directional beam 70a including the singing voice and emits it toward the table 7a at which the group of the singer 6 is seated, and also detects the position of the singer 6, generates a directional beam 6a including the singing voice, and emits it toward the singer 6. As shown in FIG. 1(B), when the singer 6 moves, the karaoke apparatus 1 tracks the position of the singer 6, generates the directional beam 6a including the singing voice, and emits it toward the singer 6. In addition, the karaoke apparatus 1 generates directional beams 70b to 70d including the guide vocal and emits them toward the other tables 7b to 7d. In this case, the karaoke apparatus 1 receives an operation input from the singer 6 specifying the table 7a toward which the singing voice is emitted.
[0019]
In the present invention, the karaoke apparatus 1 emits a measurement sound, included in advance in the karaoke song, from the speakers at both ends of the speaker array 3 and collects it with the microphone 2. The karaoke apparatus 1 measures the time from the emission of the measurement sound to its collection and detects the position of the microphone 2 by triangulation. The karaoke apparatus 1 emits the measurement sound periodically to track the microphone 2 and directs the directional beam 6a including the singing voice toward it. Furthermore, the measurement sound is composed of harmonics of the fundamental frequency of the masker, using an instrument sound included in the accompaniment sound of the karaoke song as the masker. By sounding the measurement sound simultaneously with, or immediately after, the sounding of the masker, the karaoke apparatus 1 can emit the measurement sound while it is masked simultaneously or masked over time. Thereby, in the present invention, the measurement sound can be emitted without being perceived by anyone, the position of the microphone 2 can be detected, and the directional beam 6a including the singing voice can therefore be emitted toward the singer 6 while tracking the singer 6. In the present invention, a masker is a sound that hides the sounding of the measurement sound.
[0020]
The method of detecting the microphone position will now be described with reference to FIG. 2. As shown in FIG. 2, the karaoke apparatus 1 emits the measurement sound 83 from the speakers SP1 and SPn at both ends of the speaker array 3 (speakers SP1 to SPn). When emitted from the speaker SP1 and the speaker SPn, the measurement sound 83 is collected by the microphone 2. Here, the elapsed time until the measurement sound 83 emitted from the speaker SP1 is collected by the microphone 2 is Ta, the elapsed time until the measurement sound 83 emitted from the speaker SPn is collected is Tb, the distance from the speaker SP1 is La, and the distance from the speaker SPn is Lb. From the elapsed times (Ta < Tb) for the speakers SP1 and SPn, the distances (La < Lb) from the speakers SP1 and SPn to the microphone 2 are determined. From these, the position of the microphone 2 is calculated by triangulation (see (A)). When the elapsed times satisfy Ta = Tb (see (B)) or Ta > Tb (see (C)), the position of the microphone 2 is calculated in the same manner. Thereby, by periodically emitting the measurement sound 83 from the speakers SP1 and SPn, the karaoke apparatus 1 can detect and track the position of the microphone 2 and emit the singing voice toward it.
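As a minimal sketch of this triangulation step (not the patent's own code), the following Python function converts the two elapsed times into distances and intersects the two circles centred on the end speakers; the coordinate system, array width, speed of sound, and function name are assumptions introduced for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature value

def locate_microphone(t_a, t_b, array_width):
    """Estimate the microphone position from the elapsed times Ta and Tb.

    SP1 is assumed at the origin (0, 0) and SPn at (array_width, 0);
    the microphone is assumed to lie in front of the array (y >= 0).
    Returns (x, y) in metres, or None if the times are inconsistent.
    """
    la = SPEED_OF_SOUND * t_a   # distance La from SP1
    lb = SPEED_OF_SOUND * t_b   # distance Lb from SPn
    d = array_width
    # Intersection of the two circles centred on SP1 and SPn.
    x = (la ** 2 - lb ** 2 + d ** 2) / (2.0 * d)
    y_squared = la ** 2 - x ** 2
    if y_squared < 0.0:
        return None             # measurement noise made the circles miss
    return x, math.sqrt(y_squared)

# Example: Ta = 5.5 ms, Tb = 7.0 ms, speaker array 1.5 m wide.
print(locate_microphone(0.0055, 0.0070, 1.5))
```

The same computation covers the Ta = Tb and Ta > Tb cases of FIG. 2(B) and FIG. 2(C), since only the relative magnitudes of the two distances change.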
[0021]
Next, the measurement sound 83 emitted from the speakers SP1 and SPn will be described with
reference to FIGS. FIG. 3 is an explanatory view of selecting a masker. FIG. 3A shows an example
suitable for a masker. FIG. 3B shows an example not suitable for a masker. FIG. 4 is an
explanatory view of the addition of the measurement sound.
[0022]
The measurement sound 83 is generated using harmonics of the fundamental frequency of the masker, with an instrument sound included in the accompaniment sound of the karaoke song serving as the masker. By being sounded simultaneously with, or immediately after, the sounding of the masker, the measurement sound 83 is masked simultaneously or masked over time. In addition, the sound pressure level of the measurement sound 83 is changed according to the type and level of the instrument sound. For example, when the sound pressure of the instrument sound rises, the sound pressure of the measurement sound 83 is increased, and when the sound pressure of the instrument sound falls, the sound pressure of the measurement sound 83 is decreased. As a result, the singer 6 and the customers in the store 5 can enjoy karaoke without perceiving the measurement sound 83.
[0023]
As shown in FIG. 3A, an instrument sound suitable as a masker is one having sound components from the low band to the high band, for example a harpsichord, a glockenspiel, a xylophone, or another instrument sound whose waveform approximates a sawtooth wave. Conversely, as shown in FIG. 3B, an instrument sound not suitable as a masker is one having sound components only in the low frequency range and none in the high frequency range, for example an organ or a horn.
[0024]
In general, the frequency band perceptible to human hearing is about 20 Hz to 20 kHz, and a frequency band of 15 kHz or higher may or may not be audible, depending on the person. Therefore, as shown in FIG. 4, when an instrument with a definite pitch is used as the masker, the measurement sound 83 is generated at a harmonic of the fundamental frequency of the masker instrument sound that lies in the band of 15 kHz or higher. When an instrument without a definite pitch is used as the masker, the measurement sound 83 is generated in a band where frequency components of the masker instrument sound are present and which is difficult for people to hear (15 kHz or higher). As a result, the measurement sound 83 is masked by the masker and lies in a frequency band that is difficult for people to hear, so it is not perceived by the singer 6 or the customers in the store 5.
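As a hedged illustration of choosing such a harmonic, the sketch below picks the lowest integer multiple of the masker's fundamental that falls at or above 15 kHz and within the assumed 20 kHz microphone bandwidth; the function name and limits are assumptions for illustration.

```python
def measurement_frequency(masker_fundamental_hz, floor_hz=15000.0, ceiling_hz=20000.0):
    """Lowest harmonic of the masker fundamental in the hard-to-hear band,
    or None if no harmonic fits below the assumed microphone ceiling."""
    n = 1
    while n * masker_fundamental_hz < floor_hz:
        n += 1
    f = n * masker_fundamental_hz
    return f if f <= ceiling_hz else None

# Example: an A6 note (about 1760 Hz) as masker -> 9th harmonic, 15 840 Hz.
print(measurement_frequency(1760.0))
```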
[0025]
Specifically, for example, the measurement sound 83 is included in advance in the data together with the accompaniment sound 81, in a frequency band that is hard for people to hear (15 kHz or higher), and is emitted from the karaoke apparatus 1 together with the accompaniment sound 81. When the accompaniment sound 81 is emitted from the speaker array 3, the karaoke apparatus 1 passes the accompaniment sound 81 and the measurement sound 83 through a low-pass filter. By cutting the frequency band that is difficult for people to hear (15 kHz or higher), the karaoke apparatus 1 emits from the speaker array 3 only the accompaniment sound 81 from which the measurement sound 83 has been removed. Next, the karaoke apparatus 1 passes the accompaniment sound 81 and the measurement sound 83 through a band-pass filter that extracts, from the band removed by the low-pass filter (15 kHz or higher), the band in which the measurement sound 83 exists, and emits the result from the speakers SP1 and SPn at both ends. Thereby, the accompaniment sound 81 can be emitted from each of the speakers SP1 to SPn of the speaker array 3, while the speakers SP1 and SPn at both ends emit the measurement sound 83 together with the accompaniment sound 81.
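The low-pass/band-pass split described above could be sketched as follows; the sample rate, cutoff frequencies, filter order, and use of SciPy are assumptions for illustration and not the patent's implementation.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000  # assumed sample rate

# Low-pass keeps the audible programme (accompaniment) below 15 kHz.
sos_lp = butter(8, 15000, btype="lowpass", fs=FS, output="sos")
# Band-pass isolates the measurement tone hidden between 15 and 20 kHz.
sos_bp = butter(8, [15000, 20000], btype="bandpass", fs=FS, output="sos")

def split_program_and_measurement(mixed):
    """Return (accompaniment_only, measurement_only) from the mixed signal."""
    return sosfilt(sos_lp, mixed), sosfilt(sos_bp, mixed)

# Example: a 440 Hz accompaniment tone plus a 16 kHz measurement tone.
t = np.arange(FS) / FS
mixed = np.sin(2 * np.pi * 440 * t) + 0.05 * np.sin(2 * np.pi * 16000 * t)
accompaniment, measurement = split_program_and_measurement(mixed)
```

The low-pass output would feed all speakers SP1 to SPn, while the band-pass output would be added only to the signals of the end speakers SP1 and SPn.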
[0026]
Note that the measurement sound 83 does not necessarily have to be generated in the frequency band that is difficult for people to hear (15 kHz or higher); when the masker instrument has a definite pitch, the measurement sound may be generated at a harmonic of the fundamental frequency of the masker, and when the masker instrument has no definite pitch, it may be generated in a band containing frequency components of the masker's sound. In that case, the karaoke apparatus 1 only needs to be able to detect the measurement sound 83 within the accompaniment sound 81 and the singing voice collected by the microphone 2.
[0027]
Next, the functions of the karaoke apparatus 1 will be described with reference to FIG. 5. FIG. 5 is a functional block diagram of the karaoke apparatus. The karaoke apparatus 1 includes an operation unit 100, a control unit 10, a storage unit 8, a MIDI sound source 91, a guide vocal reproduction unit 92, the microphone 2, the speaker array 3 (speakers SP1 to SPn), A/D converters 11 and 16, beam forming units 13 and 18, low-pass filters 12 and 17, band-pass filters 14 (14a to 14d) and 19 (19a to 19d), a microphone position detection unit 15, a mixer 20, D/A converters 21 (21-1 to 21-n), and AMPs 22 (22-1 to 22-n). In the following, to simplify the description, the sound collection range of the microphone 2 used in the present embodiment is assumed to be 20 kHz or less, and the measurement sound 83 is assumed to be generated in the frequency band of 15 kHz to 20 kHz.
[0028]
The operation unit 100 receives an operation input from the singer 6 or another user and outputs the content of the operation input to the control unit 10. For example, the operation unit 100 receives various settings such as the selection of a karaoke song, the designation of the table 7a toward which the singing voice of the singer 6 is to be emitted, and whether or not the guide melody 82 is to be emitted. To simplify the description, it is assumed that the guide melody 82 is set not to be emitted.
[0029]
The control unit 10 receives the operation input of the operation unit 100 and controls each
functional unit of the karaoke apparatus 1 described below. The control method of each
functional unit will be described later.
[0030]
The storage unit 8 stores a plurality of karaoke songs, and stores, for each karaoke song, data of
an accompaniment sound 81, data of a guide melody 82, data of a measurement sound 83, and
data of a guide vocal 84.
[0031]
The MIDI sound source 91 sequentially acquires the data of the accompaniment sound 81, the
data of the guide melody 82, and the data of the measurement sound 83 from the storage unit 8
according to an instruction of the control unit 10, and outputs the data to the A / D converter 11.
The accompaniment sound 81 is composed of various musical instrument sounds. The guide
melody 82 is the main melody of the accompaniment sound 81, and is for supporting the singing
of the singer 6. The measurement sound 83 is generated from the accompaniment sound 81 at a
harmonic of the fundamental frequency of the masker, using one or more instrument sounds as
the masker. At this time, an instrument sound to be a masker is appropriately selected according
to the karaoke song. In addition, the measurement sound 83 is emitted periodically (for example,
every measure, etc.). Furthermore, the measurement sound 83 is emitted from the speakers SP1
and SPn at both ends of the speaker array 3. At this time, the measurement sound 83 is
generated in different frequency bands for each of the speakers SP1 and SPn, and is emitted from
the speakers SP1 and SPn. As a result, the karaoke apparatus 1 can determine from which of the speakers SP1 and SPn the measurement sound 83 collected by the microphone 2 was emitted.
[0032]
The karaoke apparatus 1 can detect the position of the microphone 2 whether the measurement sound 83 is emitted from the speakers SP1 and SPn at both ends of the speaker array 3 simultaneously or separately. When the measurement sound 83 is emitted separately from the speaker SP1 and the speaker SPn, measurement sounds of the same frequency may be used; when it is emitted simultaneously from the speaker SP1 and the speaker SPn, the frequencies must differ. Furthermore, when the singer is moving continuously, the position of the microphone 2 can be detected more accurately when the measurement sound 83 is emitted simultaneously from the speakers SP1 and SPn at both ends of the speaker array 3 than when it is emitted separately.
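As a minimal sketch of assigning distinct frequencies to the two end speakers for simultaneous emission, the fragment below builds a short high-frequency burst per speaker; the frequencies, burst length, fade, and names are illustrative assumptions.

```python
import numpy as np

FS = 48000  # assumed sample rate

# Distinct frequencies let the apparatus tell which end speaker a
# collected tone came from (values are illustrative only).
MEASUREMENT_FREQ = {"SP1": 16000.0, "SPn": 18000.0}

def measurement_burst(speaker, duration_s=0.05, level=0.05):
    """Short sine burst to be added only to the named end speaker's channel."""
    t = np.arange(int(FS * duration_s)) / FS
    burst = level * np.sin(2 * np.pi * MEASUREMENT_FREQ[speaker] * t)
    # A short fade-in/out avoids audible clicks at the burst edges.
    fade = int(0.005 * FS)
    burst[:fade] *= np.linspace(0.0, 1.0, fade)
    burst[-fade:] *= np.linspace(1.0, 0.0, fade)
    return burst

sp1_tone = measurement_burst("SP1")
spn_tone = measurement_burst("SPn")
```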
[0033]
The guide vocal reproducing unit 92 sequentially acquires data of the guide vocal 84 from the
storage unit 8 according to an instruction of the control unit 10 and outputs the data to the A / D
converter 11. The guide vocal 84 is composed of a singing voice as a model and is for supporting
the singing of the singer 6.
[0034]
The A / D converter 11 converts these data input from the MIDI sound source 91 and the guide
vocal reproduction unit 92 from analog format to digital format to generate an audio signal.
[0035]
The low-pass filter 12 passes, from the audio signal input from the A/D converter 11, only the frequency band in which the audio signal of the measurement sound 83 does not exist (up to 15 kHz), and inputs it to the beam forming unit 13 described later.
The band-pass filters 14 (14a to 14d) extract, from the audio signal input from the A/D converter 11, frequency components of the band in which the audio signal of the measurement sound 83 exists (parts of the 15 to 20 kHz band), and input them to the microphone position detection unit 15 described later. The band-pass filters 14a to 14d each extract different frequency components.
[0036]
The microphone 2 picks up the singing voice of the singer 6 and also picks up the emitted sound from the speakers SP1 to SPn. The singing voice of the singer 6 and the emitted sound from the speaker array 3 (accompaniment sound 81, measurement sound 83, guide vocal 84, and so on) picked up by the microphone 2 are input, via the A/D converter 16 and the respective filters 17 and 19 (19a to 19d), to the beam forming unit 18 and to the microphone position detection unit 15 described later. The singing voice and the emitted sound from the speaker array 3 (hereinafter collectively referred to as the collected voice) are A/D converted by the A/D converter 16 into a collected voice signal. The low-pass filter 17 passes only the low-band portion of the collected voice signal that does not include the measurement sound 83 (up to 15 kHz) and inputs it to the beam forming unit 18; the low-pass filter 17 thus extracts from the collected voice signal the frequency components corresponding to the low-pass filter 12. The band-pass filters 19 (19a to 19d) pass only the frequency components of the collected voice signal that include the measurement sound 83 (parts of the 15 to 20 kHz band) and input them to the microphone position detection unit 15 described later; the band-pass filters 19a to 19d extract from the collected voice signal the frequency components corresponding to the band-pass filters 14a to 14d.
[0037]
The beam forming units 13 and 18 form an emitted sound signal for each of the speakers SP1 to SPn so that the directional beams 6a and 70a to 70d are emitted with directivity from the speaker array 3, and so that the accompaniment sound 81 is emitted without directivity from the speakers SP1 and SPn at both ends of the speaker array 3. Specifically, according to the instruction of the control unit 10, the beam forming unit 13 forms, from the audio signal of the accompaniment sound 81 filtered by the low-pass filter 12 and the audio signal of the guide vocal 84, an emitted sound signal for each of the speakers SP1 to SPn constituting the speaker array 3, and outputs the emitted sound signals to the mixer 20. The beam forming unit 18 forms, from the collected sound signal from which the measurement sound 83 has been removed by the low-pass filter 17, an emitted sound signal for each of the speakers SP1 to SPn constituting the speaker array 3, and outputs it to the mixer 20. When a beam forming coefficient is input from the microphone position detecting unit 15 described later, the beam forming units 13 and 18 determine the emission direction of the directional beam 6a based on that coefficient, form the corresponding emitted sound signals for the speakers SP1 to SPn, and output them to the mixer 20.
[0038]
The mixer 20 mixes the emitted sound signals (accompaniment sound 81, guide vocal 84, and collected sound) input from the beam forming units 13 and 18. Specifically, the mixer 20 adds the measurement sound 83 input from the band-pass filter 14 to the emitted sound signals for the speakers SP1 and SPn at both ends. The directional beams 6a and 70a directed at the singer 6 and the table 7a at which the group of the singer 6 is seated are generated by adding the emitted sound signal of the singing voice to the emitted sound signal of the accompaniment sound 81 and the like; the directional beams 70b to 70d for the other tables 7b to 7d are generated by adding the emitted sound signal of the guide vocal 84 to the emitted sound signal of the accompaniment sound 81 and the like. The mixer 20 inputs the emitted sound signals to the speakers SP1 to SPn through the D/A converters 21 (21-1 to 21-n) and the AMPs 22 (22-1 to 22-n). The D/A converters 21 and the AMPs 22 perform D/A conversion, amplification, and so on of the emitted sound signals, and the speakers SP1 to SPn emit the directional beams 6a and 70a to 70d.
[0039]
The microphone position detection unit 15 includes level detection units 151 (151a to 151d),
153 (153a to 153d), timer units 152 (152a to 152d), a microphone position calculation unit
154, and a beam formation coefficient calculation unit 155. The microphone position detection
unit 15 calculates a beam formation coefficient that determines the sound emission direction of
the directional beam 6 a emitted to the singer 6.
[0040]
Specifically, when the level detection unit 151 detects the audio signal of the measurement sound 83 included in the audio signal input through the band-pass filter 14, it instructs the timer unit 152 to start the timer. When the level detection unit 153 detects the audio signal of the measurement sound 83 included in the collected sound signal input through the band-pass filter 19, it instructs the timer unit 152 to stop the timer. The timer unit 152 counts the time from the reception of the start instruction to the reception of the stop instruction and outputs this time information to the microphone position calculation unit 154.
[0041]
At this time, since each pair of band-pass filters 14a to 14d and 19a to 19d extracts the same frequency components, the measurement sound 83 emitted from the speakers SP1 and SPn (detected by the level detection unit 151) and the measurement sound 83 collected by the microphone 2 (detected by the level detection unit 153) can be associated with each other. The timer unit 152 can therefore obtain, as the time from the start to the stop of the timer, the time from when the measurement sound 83 is emitted from the speakers SP1 and SPn until it is collected by the microphone 2 (hereinafter referred to as the elapsed time).
[0042]
The measurement sounds 83 for the speakers SP1 and SPn pass through band-pass filters 14a to 14d that extract different frequency components before being emitted. It can therefore be determined to which of the speakers SP1 and SPn a measurement sound 83 input to the level detection units 151 and 153 corresponds, so the timer unit 152 can obtain the elapsed time for each of the speakers SP1 and SPn.
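A minimal sketch of the level-detection and timer idea, assuming both the emitted and the collected band-passed signals share one sample clock: the elapsed time is the offset between the first samples at which the tone exceeds a threshold. The threshold, sample rate, and function names are assumptions.

```python
import numpy as np

FS = 48000  # assumed sample rate

def first_onset(bandpassed, threshold=0.01):
    """Index of the first sample whose magnitude exceeds the threshold,
    or None if the measurement tone never appears."""
    idx = np.flatnonzero(np.abs(bandpassed) > threshold)
    return int(idx[0]) if idx.size else None

def elapsed_time(emitted_bp, collected_bp):
    """Elapsed time in seconds between emission and collection of the tone."""
    start = first_onset(emitted_bp)   # timer start (level detection unit 151)
    stop = first_onset(collected_bp)  # timer stop (level detection unit 153)
    if start is None or stop is None or stop < start:
        return None
    return (stop - start) / FS
```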
[0043]
The microphone position calculation unit 154 calculates the position of the microphone 2 based on the elapsed times for the speakers SP1 and SPn. Based on the position of the microphone 2 calculated by the microphone position calculation unit 154, the beam formation coefficient calculation unit 155 calculates beam forming coefficients that give directivity toward the position of the microphone 2, and outputs them to the beam forming units 13 and 18.
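For illustration only, the beam forming coefficients could be simple delay-and-sum steering delays toward the detected microphone position; the array geometry, element spacing, and function name below are assumptions, not the patent's formulation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(mic_xy, n_speakers, spacing=0.05):
    """Per-speaker delays (seconds) that focus a delay-and-sum beam on mic_xy.
    Speakers are assumed to lie on the x axis, centred on x = 0."""
    xs = (np.arange(n_speakers) - (n_speakers - 1) / 2.0) * spacing
    dists = np.hypot(mic_xy[0] - xs, mic_xy[1])
    # Delay the closer speakers so that all wavefronts arrive together.
    return (dists.max() - dists) / SPEED_OF_SOUND

# Example: 16-element array, microphone detected at (0.4 m, 2.0 m).
print(steering_delays((0.4, 2.0), 16))
```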
[0044]
Next, the flow of processing when generating the directional beam 6a directed at the singer 6 will be described with reference to FIG. 6. FIG. 6 is a flowchart showing the procedure for generating the directional beam when the karaoke song includes the measurement sound. To simplify the description, only the method of generating the directional beam 6a for the singer 6 is described, and the method of generating the directional beams 70a to 70d for the respective tables 7a to 7d is omitted.
[0045]
First, the flow of processing during karaoke performance will be described. As shown in FIG. 6, in step S101, the MIDI sound source 91 reads the data of the accompaniment sound 81, the data of the guide melody 82, and the data of the measurement sound 83 from the storage unit 8 according to the instruction of the control unit 10 and outputs them to the A/D converter 11. Each data item is A/D converted into a corresponding audio signal, and the process proceeds to step S102.
[0046]
In step S 102, the audio signal of the accompaniment sound 81, the audio signal of the guide
melody 82, and the audio signal of the measurement sound 83 are output to the low pass filter
12. At this time, the audio signal of the measurement sound 83 is removed from these audio
signals. The audio signal of the accompaniment sound 81 that has passed through the low pass
filter 12 and the audio signal of the guide melody 82 are output to the beam forming unit 13
(S103), and the process proceeds to step S104.
[0047]
In step S104, it is checked whether the beam forming coefficient has been input to the beam
forming unit 13. When the beam forming coefficient is input (during singing) (S104: Yes), the
process proceeds to step S106.
[0048]
When the beam forming coefficient has not been input (at the start of singing) (S104: No), the beam forming unit 13 generates, according to the instruction of the control unit 10, an emitted sound signal from the audio signal of the accompaniment sound 81 so that the directional beam 6a is emitted toward the monitor 4 (S105). As described above, when the singer 6 starts singing, detection of the singing position has not yet started, so no beam forming coefficient is input; the beam forming unit 13 therefore generates the emitted sound signal so as to emit the directional beam 6a toward the monitor 4. Alternatively, the beam forming unit 13 may generate the emitted sound signal only when the beam forming coefficient is input.
[0049]
In step S106, the beam forming unit 13 performs directivity control based on the beam forming coefficient according to the instruction of the control unit 10, generates an emitted sound signal from the audio signal of the accompaniment sound 81, and proceeds to step S107.
[0050]
In step S107, the beam forming unit 13 outputs the generated emitted sound signal to the mixer
20, and the process proceeds to step S108.
[0051]
In step S108, the audio signal of the accompaniment sound 81, the audio signal of the guide
melody 82, and the audio signal of the measurement sound 83 are output to the band pass filter
14.
The band pass filter 14 passes only the audio signal of the measurement sound 83 from these
audio signals, and outputs the audio signal to the level detection unit 151.
Then, when the audio signal of the measurement sound 83 is detected by the level detection unit
151 (S109: Yes), the timer unit 152 starts the timer (S110), and the process proceeds to step
S111.
[0052]
In step S111, the audio signal of the measurement sound 83 output from the band-pass filter 14 is added to the emitted sound signal in the mixer 20. At this time, the measurement sound 83 is added to the emitted sound signal so that it is emitted from the speakers SP1 and SPn at both ends of the speaker array 3.
[0053]
In step S112, these emitted sound signals are emitted from the speakers SP1 to SPn via the corresponding D/A converters 21 and AMPs 22, and the process proceeds to step S113. The emitted sound signal becomes the directional beam 6a and is emitted toward the singer 6.
[0054]
In step S113, the microphone 2 picks up the singing voice of the singer 6 and the emitted sound from the speaker array 3 (hereinafter, the collected voice). The collected voice is input to the A/D converter 16, and the process proceeds to step S114. At this time, the collected voice is A/D converted into a collected voice signal.
[0055]
In step S114, the collected voice signal is output to the low pass filter 17. At this time, the audio
signal of the measurement sound 83 (included in the emitted sound emitted from the speaker
array 3) is removed from the collected sound signal. The collected voice signal that has passed
through the low pass filter 17 is output to the beam forming unit 18 (S115), and the process
proceeds to step S116.
[0056]
In step S116, it is checked whether the beam forming coefficient has been input to the beam
forming unit. If the beam forming coefficient is input (during singing) (S116: Yes), the process
proceeds to step S118.
[0057]
When the beam forming coefficient has not been input (at the start of singing) (S116: No), the beam forming unit 18 generates, according to the instruction of the control unit 10, an emitted sound signal from the collected voice signal so that the directional beam 6a is emitted toward the monitor 4 (S117). As described above, when the singer 6 starts singing, detection of the singing position has not yet started, so no beam forming coefficient is input; the beam forming unit 18 therefore generates the emitted sound signal so as to emit the directional beam 6a toward the monitor 4. Alternatively, the beam forming unit 18 may generate the emitted sound signal only when the beam forming coefficient is input.
[0058]
In step S118, the beam forming unit 18 performs directivity control based on the beam forming
coefficient according to an instruction from the control unit 10, generates an emitted sound
signal from the collected sound signal, and proceeds to step S119.
[0059]
In step S119, the beam forming unit 18 outputs the emitted sound signal to the mixer 20, and
the process proceeds to step S120.
[0060]
In step S120, the collected voice signal is output to the band pass filter 19.
The band pass filter 19 acquires an audio signal of the measurement sound 83 from the collected
sound signal and outputs the audio signal to the level detection unit 153.
Then, when the audio signal of the measurement sound 83 is detected by the level detection unit
153 (S121: Yes), the timer unit 152 stops the timer (S122), and the process proceeds to step
S123.
[0061]
In step S123, the microphone position calculation unit 154 calculates the position of the
microphone 2 based on the measurement time from the start to the stop of the timer, and the
process proceeds to step S124.
[0062]
In step S124, the beam formation coefficient calculation unit 155 calculates the beam forming coefficient so that the directional beam 6a is emitted from the speaker array 3 toward the position of the microphone 2.
The karaoke apparatus 1 inputs the calculated beam forming coefficients to the beam forming units 13 and 18, and returns to step S101.
[0063]
The karaoke apparatus 1 repeats the processing of steps S101 to S124 described above, and emits the accompaniment sound 81, the measurement sound 83, and the collected sound picked up by the microphone 2 from the speaker array 3 until the karaoke song ends.
[0064]
As described above, the karaoke apparatus 1 according to the first embodiment can emit the singing voice, the guide vocal, and the accompaniment sound 81 with directivity from the speaker array 3, and can emit the accompaniment sound 81 and the measurement sound 83 from the speakers SP1 and SPn at both ends of the speaker array 3.
By obtaining the elapsed time until the measurement sound 83 is collected by the microphone 2, the karaoke apparatus 1 can detect the microphone position, that is, the position of the singer 6, and can emit the directional beam 6a toward the singer 6. In addition, the measurement sound 83 uses an instrument sound as the masker, is formed of harmonics of the fundamental frequency of the masker, and is sounded at the timing of the masker's sounding. As a result, the singer 6 and the customers in the store 5 can enjoy karaoke without perceiving the measurement sound 83. Furthermore, since the measurement sound 83 is generated in a frequency band that is hard for human beings to perceive, the singer 6 and the customers in the store 5 are all the less likely to perceive it.
[0065]
Second Embodiment: Next, a second embodiment of the present invention will be described with reference to the drawings. The karaoke apparatus 1 according to the second embodiment of the present invention differs from the first embodiment in that the data of the measurement sound 83 is not included in the karaoke song. Therefore, the karaoke apparatus 1 analyzes the data of the accompaniment sound 81 and determines an instrument sound to serve as the masker. The karaoke apparatus 1 generates and sounds the measurement sound 83 at the sounding timing of the instrument sound serving as the masker. At this time, the measurement sound 83 is generated as a harmonic of the fundamental frequency of the masker, using an instrument sound (for example, a harpsichord) selected from the accompaniment sound 81 as the masker. FIG. 7 is a functional block diagram of the karaoke apparatus. FIG. 8 is a flowchart showing the procedure for generating the directional beam when the measurement sound is generated from the accompaniment sound.
[0066]
As shown in FIG. 7, in the karaoke apparatus 1 of the second embodiment, the measurement
sound 83 is not included in the karaoke song. The karaoke apparatus 1 further includes a MIDI
signal analysis unit 23, a measurement sound MIDI signal generation unit 24, and a MIDI signal
merge unit 25. These functional units are described below.
[0067]
The MIDI signal analysis unit 23 analyzes the MIDI data of the accompaniment sound 81 and determines the instrument sound to serve as the masker, here a harpsichord. The MIDI signal analysis unit 23 instructs the measurement sound MIDI signal generation unit 24 to generate the measurement sound 83 at the same timing as the masker and at a harmonic of the fundamental frequency of the masker. Specifically, by reading the velocity, volume, and expression values of the harpsichord notes detected in the MIDI data of the accompaniment sound 81 within a predetermined time (for example, one measure), the note with the highest sound pressure level is detected and selected as the masker. The note number and pitch bend values of the MIDI data of the measurement sound 83 are determined so that its frequency is an integral multiple of the masker frequency (calculated from the masker's note number, pitch bend, and other values), its note-on value is set to the same value as the masker's note-on value, and its velocity, volume, and expression values are determined appropriately based on the masker's velocity, volume, and expression values. The measurement sound MIDI signal generation unit 24 receives these values from the MIDI signal analysis unit 23 and generates the MIDI data of the measurement sound 83 based on them.
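A hedged sketch of this selection step, assuming the accompaniment is available as a list of simple note events: the loudest harpsichord note in the measure is taken as the masker, and the measurement tone is placed at its lowest harmonic above 15 kHz. The event fields, MIDI-to-frequency conversion, and function names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """Minimal note event assumed for illustration."""
    time: float        # note-on time within the measure (beats)
    note: int          # MIDI note number
    velocity: int      # 0-127
    volume: int        # channel volume, 0-127
    expression: int    # expression controller, 0-127

def note_to_hz(note):
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def pick_masker(measure_events):
    """Choose the loudest note in the measure as the masker (crude proxy
    for sound pressure level from velocity, volume, and expression)."""
    return max(measure_events,
               key=lambda e: e.velocity * e.volume * e.expression)

def measurement_note(masker, floor_hz=15000.0):
    """Onset time and frequency of a measurement tone that is an integer
    multiple of the masker's fundamental and lies at or above 15 kHz."""
    f0 = note_to_hz(masker.note)
    n = 1
    while n * f0 < floor_hz:
        n += 1
    return masker.time, n * f0

measure = [NoteEvent(0.0, 72, 100, 110, 120), NoteEvent(1.0, 76, 60, 110, 120)]
print(measurement_note(pick_masker(measure)))
```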
[0068]
The MIDI signal analysis unit 23 also generates the measurement sound 83 based on one or more fundamental frequencies. When the measurement sound 83 is generated based on a plurality of fundamental frequencies, even if the sounding of one particular fundamental frequency is interrupted, the measurement sound 83 can still be emitted periodically based on the sounding of the other fundamental frequencies. Moreover, since the instrument sound used as the masker is not limited to one, a plurality of instrument sounds (for example, harpsichord and glockenspiel) may be used as long as they are suitable as maskers. As a result, even if the sounding of one instrument sound is interrupted, the measurement sound 83 can be emitted periodically by generating it based on the sounding of the other instrument sound. When an instrument without a definite pitch is used as the masker, the measurement sound 83 is generated in a band containing frequency components of the masker's sound.
[0069]
The measurement sound 83 may be emitted simultaneously from the speakers SP1 and SPn at
both ends of the speaker array 3 or separately. When the measurement sound 83 is
simultaneously emitted from the speakers SP1 and SPn at both ends of the speaker array 3, the
measurement sound 83 is generated at different frequencies for each of the speakers SP1 and
SPn.
[0070]
The measurement sound MIDI signal generation unit 24 receives the instruction from the MIDI signal analysis unit 23, generates the MIDI signal of the measurement sound 83, and outputs it to the MIDI signal merging unit 25. Specifically, according to the instruction of the MIDI signal analysis unit 23, a MIDI signal of the measurement sound 83 is generated so as to be sounded at the same timing as the harpsichord and to be a harmonic of the fundamental frequency of the harpsichord, and is output to the MIDI signal merging unit 25.
[0071]
Note that both the accompaniment sound 81 and the guide melody 82 can be used as the masker; however, since the accompaniment sound 81 has a higher sound pressure level than the guide melody 82, it is more suitable as the masker.
[0072]
The MIDI signal merging unit 25 adds the MIDI data of the measurement sound 83 to the MIDI
data of the accompaniment sound 81 and outputs the result to the MIDI sound source 91. The
MIDI sound source 91 converts the MIDI data of the accompaniment sound 81 merged by the
MIDI signal merging unit 25 and the MIDI data of the measurement sound 83 into audio signals
and outputs them. As described above, the accompaniment sound 81 is analyzed to generate a
measurement sound 83.
[0073]
Note that the measurement sound 83 does not necessarily have to be generated in the frequency band that is difficult for people to hear (15 kHz or higher); when the masker instrument has a definite pitch, it may be generated at a harmonic of the fundamental frequency of the masker, and when the masker instrument has no definite pitch, it may be generated in a band containing frequency components of the masker's sound. In that case, the karaoke apparatus 1 only needs to be able to detect the measurement sound 83 within the accompaniment sound 81 and the singing voice collected by the microphone 2.
[0074]
Further, as shown in FIG. 8, in the second embodiment, the processing of steps S201 to S206 is added to the processing of the first embodiment in the flow for generating the directional beam directed at the singer 6. Only the added steps S201 to S206 will be described below.
[0075]
In step S201, the control unit 10 inputs the MIDI data of the accompaniment sound 81 and the MIDI data of the guide melody 82 from the storage unit 8 to the MIDI signal analysis unit 23. The MIDI signal analysis unit 23 analyzes the MIDI data of the accompaniment sound 81, determines from the accompaniment sound 81 the instrument sound (harpsichord) to serve as the masker (S202), and proceeds to step S203.
[0076]
In step S203, the MIDI signal analysis unit 23 checks whether the sound pressure level of the fundamental frequency of the harpsichord has risen sharply within a fixed time (for example, one measure). If a sharp rise in the sound pressure level of the fundamental frequency of the harpsichord is detected (S203: Yes), the process proceeds to step S204.
[0077]
In step S204, the measurement sound MIDI signal generation unit 24 generates MIDI data of the
measurement sound 83 at a harmonic of the fundamental frequency at which the sound pressure
level has rapidly increased, and the process proceeds to step S205. At this time, the sound
pressure level of the measurement sound 83 is determined based on the sound pressure level of
the fundamental frequency at which the sound pressure level has rapidly increased.
[0078]
In step S205, the MIDI signal merging unit 25 adds the MIDI data of the measurement sound 83 to the MIDI data of the accompaniment sound 81 and proceeds to step S206. At this time, the MIDI signal merging unit 25 adds the measurement sound 83 so that it is simultaneously masked by the fundamental frequency whose sound pressure level has risen sharply.
[0079]
In step S206, the MIDI signal analysis unit 23 repeatedly performs the processing of steps S201
to S205 until the analysis of the accompaniment sound 81 is completed (S206: No). At this time,
the end of the analysis of the accompaniment sound 81 can be known from the fact that the
accompaniment sound 81 is not input to the MIDI signal analysis unit 23. When the analysis of
the accompaniment sound 81 is completed (S206: Yes), the process proceeds to step S207. The
processes after step S207 are the same as the processes after step S101 in the first embodiment.
[0080]
In the second embodiment, the accompaniment sound 81 is analyzed and the emission of the accompaniment sound 81 and the like is started after the addition of the measurement sound 83 to the accompaniment sound 81 is completed. However, the present invention is not limited to this; as shown in FIG. 9, the accompaniment sound 81 may be analyzed and its emission started while the measurement sound 83 is still being added to it. FIG. 9 is a flow in which the process of S207 is performed immediately, without performing the process of S206 of FIG. 8.
[0081]
Further, in the second embodiment the masker is a harpsichord, but the present invention is not limited to this, and any other instrument sound may be used as long as it is suitable as a masker. In the case where a plurality of instrument sounds are used as maskers, the harpsichord and the glockenspiel were used, but the present invention is not limited to this, and any plurality of instrument sounds suitable as maskers may be used.
[0082]
As described above, in the karaoke apparatus 1 according to the second embodiment, the measurement sound 83 can be generated and sounded by analyzing the accompaniment sound 81 even when the measurement sound 83 is not included in the karaoke song. As a result, as in the first embodiment, the microphone position can be detected and the directional beam 6a can be emitted toward the singer 6.
[0083]
In the first and second embodiments, the directional beams 6a and 70a consisting of the accompaniment sound 81 and the singing voice are emitted from the speaker array 3 toward the singer 6 and the table 7a at which the group of the singer 6 is seated, and the directional beams 70b to 70d consisting of the accompaniment sound 81 and the guide vocal 84 are emitted toward the other tables 7b to 7d. The description also assumed that the measurement sound 83 and the accompaniment sound 81 are emitted without directivity from the speakers SP1 and SPn at both ends of the speaker array 3. However, the present invention is not limited to this. Directional beams 6a and 70a consisting of the singing voice may be emitted from the speaker array 3 toward the singer 6 and the table 7a at which the group of the singer 6 is seated, directional beams 70b to 70d consisting of the guide vocal 84 may be emitted toward the other tables 7b to 7d, and the accompaniment sound 81 and the measurement sound 83 may be emitted without directivity from the speakers SP1 and SPn at both ends of the speaker array 3. Alternatively, directional beams 6a and 70a consisting of the accompaniment sound 81, the singing voice, and the measurement sound 83 may be emitted from the speaker array 3 toward the singer 6 and the table 7a at which the group of the singer 6 is seated, and directional beams 70b to 70d including the accompaniment sound 81, the guide vocal 84, and the measurement sound 83 may be emitted toward the tables 7b to 7d. That is, in the first and second embodiments, the measurement sound 83 may be emitted from the speakers SP1 and SPn at both ends of the speaker array 3 together with the accompaniment sound 81 serving as the masker.
[0084]
Third Embodiment: Next, a third embodiment of the present invention will be described with reference to the drawings. The karaoke apparatus 1 according to the third embodiment of the present invention differs from the other embodiments in that the data of the accompaniment sound 81, the data of the guide melody 82, and the data of the measurement sound 83 are not provided to the MIDI sound source 91 (for example, for an a cappella song). Therefore, the karaoke apparatus 1 analyzes the singing voice of the singer 6 and generates and sounds the measurement sound 83 at the timing when the sound pressure level of the singing voice rises. FIG. 10 is a functional block diagram of the karaoke apparatus. FIG. 11 is a flowchart showing the procedure for generating the directional beam when the measurement sound is generated from the singing voice.
[0085]
As shown in FIG. 10, in the third embodiment, the data of the accompaniment sound 81, the data
of the guide melody 82, and the data of the measurement sound 83 are not included in the
karaoke song. The karaoke apparatus 1 further includes an audio signal analysis unit 26, a
measurement sound generation unit 27, and a signal merging unit 28. These functional units are
described below.
[0086]
The voice signal analysis unit 26 analyzes the singing voice of the singer 6 and instructs the
measurement sound generation unit 27 to generate the measurement sound 83 at the generation
timing of the measurement sound 83. Specifically, for example, when a sudden rise in the sound
pressure level of the voice signal of the singing voice is detected, the voice signal analysis unit 26
instructs the measurement sound generation unit 27 to generate the measurement sound 83.
The audio signal analysis unit 26 detects an abrupt increase in the sound pressure level of the
audio signal of the singing voice for each measure, and instructs to periodically generate the
measurement sound 83. At this time, the level of the measurement sound 83 is determined
according to the sound pressure level of the voice signal of the singing voice. The measurement
sound 83 may be emitted simultaneously from the speakers SP1 and SPn at both ends of the
speaker array 3 or separately. When the measurement sound 83 is simultaneously emitted from
the speakers SP1 and SPn at both ends of the speaker array 3, the measurement sound 83 is
generated at different frequencies for each of the speakers SP1 and SPn.
[0087]
The measurement sound generation unit 27 receives an instruction from the audio signal
analysis unit 26, generates an audio signal of the measurement sound 83, and outputs the audio
signal to the signal merging unit 28. Specifically, the measurement sound generation unit 27
generates the measurement sound 83 so as to be an overtone of the fundamental frequency of
the singing voice.
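As a minimal sketch of this a cappella case, assuming a simple RMS-ratio onset test and a known voice fundamental, a measurement tone can be generated at the lowest harmonic of the singing voice above 15 kHz; the ratio, sample rate, and function names are illustrative assumptions.

```python
import numpy as np

FS = 48000  # assumed sample rate

def rms(x):
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(np.mean(x * x)))

def level_rose_sharply(frame, prev_frame, ratio=2.0):
    """True when the RMS level of the current frame is at least `ratio`
    times that of the previous frame (a simple onset heuristic)."""
    return rms(frame) > ratio * (rms(prev_frame) + 1e-12)

def harmonic_measurement_tone(voice_f0_hz, duration_s=0.05, floor_hz=15000.0):
    """Sine tone at the lowest harmonic of the voice fundamental above 15 kHz."""
    n = 1
    while n * voice_f0_hz < floor_hz:
        n += 1
    t = np.arange(int(FS * duration_s)) / FS
    return np.sin(2 * np.pi * n * voice_f0_hz * t)
```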
[0088]
The signal merging unit 28 adds the audio signal of the measurement sound 83 to the voice
signal of the singing voice, and outputs the result to the band pass filter 29 (29a to 29d). As
described above, the singing voice is analyzed to generate the measurement sound 83.
[0089]
Further, as shown in FIG. 11, in the third embodiment, the flow of processing for generating the directional beam 6a directed at the singer 6 omits the processing of steps S101 to S112 of the first embodiment, and the processing of steps S309 to S317 is added between step S119 and step S120. Only the added steps S309 to S317 will be described below.
[0090]
As shown in FIG. 11, when the collected voice signal picked up by the microphone 2 is input in step S309, the voice signal analysis unit 26 checks whether a rise in the sound pressure level of the collected voice signal is detected. If a sharp rise in the sound pressure level of the collected voice signal is detected (S310: Yes), the process proceeds to step S311.
[0091]
In step S311, the measurement sound generation unit 27 generates an audio signal of the
measurement sound 83 at a harmonic of the fundamental frequency of the collected sound
signal, and proceeds to step S312. At this time, the sound pressure level of the measurement
sound 83 is determined based on the sound pressure level of the collected sound signal.
[0092]
In step S312, the signal merging unit 28 adds the audio signal of the measurement sound 83 to the collected sound signal, and the process proceeds to step S313. At this time, the signal merging unit 28 performs the addition so that the audio signal of the measurement sound 83 is masked over time by the collected voice signal.
[0093]
In step S313, the collected sound signal to which the audio signal of the measurement sound 83 has been added is output to the band-pass filter 29. The band-pass filter 29 passes only the audio signal of the measurement sound 83 from the collected sound signal and outputs it to the level detection unit 151. Then, when the audio signal of the measurement sound 83 is detected by the level detection unit 151 (S314: Yes), the timer unit 152 starts the timer (S315), and the process proceeds to step S316.
[0094]
In step S316, the audio signal of the measurement sound 83 output from the band-pass filter 29 is output to the mixer 20. The mixer 20 adds the audio signal of the measurement sound 83 to the emitted sound signal and proceeds to step S317. At this time, the measurement sound 83 is added to the emitted sound signal so that it is emitted from the speakers SP1 and SPn at both ends of the speaker array 3.
[0095]
In step S317, these emitted sound signals are emitted from the speakers SP1 to SPn via the corresponding D/A converters 21 and AMPs 22, and the process proceeds to step S318. The emitted sound signal becomes the directional beam 6a and is emitted toward the singer 6. The processes from step S318 onward are the same as those from step S120 onward in the first embodiment.
[0096]
As described above, when the accompaniment sound 81 is not included in the karaoke song, the karaoke apparatus 1 emits the singing voice of the singer 6 toward the table 7a at which the singer 6 and the singer 6's group are seated, and emits the guide vocal 84 toward the other tables 7b to 7d. Moreover, the karaoke apparatus 1 emits the measurement sound 83 by using the singing voice of the singer 6 as a masker. Also, the measurement sound 83 is generated in a frequency band that is difficult for human beings to perceive. Thereby, the measurement sound 83 can be temporally masked using the singing voice of the singer 6 as a masker. For this reason, the singer 6 and the customers in the store 5 can be kept from perceiving the measurement sound 83.
[0097]
In the third embodiment, the singing voice of the singer 6 is used as a masker. However, the present invention is not limited to this; the guide vocal 84 may also be used as a masker.
[0098]
As described above, in the karaoke apparatus 1 according to the third embodiment, even for a karaoke song that does not contain the accompaniment sound 81, such as an a cappella song, the singing voice can be analyzed and the measurement sound 83 can be generated. Thus, as in the first and second embodiments, the microphone position can be detected and the directional beam 6a can be emitted toward the singer 6.
[0099]
Next, the case where a band elimination filter, a notch filter, or a comb filter is used instead of the low pass filters 12 and 17 will be described. To simplify the description, it is assumed that the measurement sound 83 lies within the pass band of the band pass filters 14, 19, and 29. Although the description is based on the first embodiment, these filters can also be applied to the second and third embodiments.
[0100]
When the band elimination filter is used, the measurement sound 83 can be cut by making the
attenuation band of the band elimination filter the same as the pass band of the band pass filters
14, 19 and 29. As a result, since the frequency band through which the accompaniment sound
81 passes is wider than that of the low pass filters 12 and 17, the accompaniment sound 81 with
better sound quality can be emitted.
[0101]
Further, by giving the attenuation band of the band elimination filter a certain bandwidth, a plurality of measurement sounds 83 of different frequencies can be generated within the attenuation band. Thus, measurement sounds 83 having different frequencies can be assigned to the speakers SP1 and SPn at both ends of the speaker array 3.
[0102]
When the notch filter is used, the measurement sound 83 can be cut by making the dip of the
notch filter the same as the peaks of the band pass filters 14, 19 and 29. Since the dip of the
notch filter is a narrow band, it is possible to emit an accompaniment sound 81 with better sound
quality than using a band elimination filter.
[0103]
Furthermore, when the comb filter is used, the measurement sound 83 can be cut by making the
dip of the comb filter the same as the peak of the band pass filters 14, 19 and 29. Since the comb
filter has a plurality of dips, it can generate a measurement sound 83 consisting of a plurality of
different frequencies. As a result, the measurement sounds 83 having different frequencies can
be applied to the speakers SP1 and SPn at both ends of the speaker array 3, and the sound
quality of the accompaniment sound 81 can be improved.
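A sketch of the notch-filter and comb-filter variants using SciPy; the Q values and the 17 640 Hz example frequency are illustrative, the only point being that the filter dips are placed on the measurement-sound frequencies so that the rest of the accompaniment band passes through.

```python
from scipy.signal import iirnotch, iircomb, lfilter

FS = 44_100
F_MEAS = 17_640.0   # example measurement-sound frequency inside the 15 kHz to 20 kHz band

# Notch filter: one narrow dip placed exactly on the measurement-sound frequency.
b_notch, a_notch = iirnotch(F_MEAS, Q=30.0, fs=FS)

# Comb filter: dips at every multiple of 4410 Hz (which evenly divides FS, as iircomb
# requires); several measurement sounds of different frequencies can sit in these dips.
b_comb, a_comb = iircomb(4410.0, Q=30.0, ftype="notch", fs=FS)

def strip_measurement(accompaniment, use_comb=False):
    """Remove the measurement-sound frequencies from the accompaniment path,
    standing in for the role played here by the replacement for the low pass filters 12 and 17."""
    b, a = (b_comb, a_comb) if use_comb else (b_notch, a_notch)
    return lfilter(b, a, accompaniment)
```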
[0104]
In the first to third embodiments, four different frequency components are extracted using the band pass filters 14, 19, and 29. However, the present invention is not limited to this; since it is only necessary to extract two or more frequency components for the left and right speakers SP1 and SPn, one or more band pass filters 14, 19, and 29 may be provided.
[0105]
In the first to third embodiments, the pass band of the low pass filters 12 and 17 is 15 kHz or less, and the measurement sound 83 is detected within the range of 15 kHz to 20 kHz. However, the present invention is not limited to this; the pass band of the low pass filters 12 and 17 may be set to the band below the frequency band in which the measurement sound 83 is generated. For example, if the measurement sound 83 is generated at 17 kHz to 18 kHz, the pass band of the low pass filters 12 and 17 is set to 17 kHz or less.
[0106]
Further, in the first to third embodiments, an example in which the measurement sound 83 is
emitted from the speakers SP1 and SPn at both ends of the speaker array 3 has been described.
However, the present invention is not limited to this, and the measurement sound 83 may be
emitted from two of the speakers SP1 to SPn constituting the speaker array 3. This makes it
possible to detect the position of the microphone 2 using trigonometry.
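The trigonometric position calculation mentioned here can be illustrated as follows; the speaker coordinates, the speed of sound, and the planar two-dimensional model are assumptions of the sketch. The elapsed times measured for the two speakers give two distances, and intersecting the corresponding circles around the speaker positions yields the microphone position.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, assumed room temperature

def microphone_position(t1, t2, sp1=(0.0, 0.0), sp2=(2.0, 0.0)):
    """Estimate the (x, y) position of the microphone 2 from the elapsed times t1 and
    t2 (in seconds) of the measurement sounds emitted from two speakers located at
    sp1 and sp2. The speakers are assumed to lie on the x-axis, and of the two circle
    intersections the one in front of the array (y >= 0) is returned."""
    d1, d2 = SPEED_OF_SOUND * t1, SPEED_OF_SOUND * t2
    base = math.dist(sp1, sp2)                        # distance between the two speakers
    x = (d1 ** 2 - d2 ** 2 + base ** 2) / (2 * base)  # along the line joining the speakers
    y_sq = d1 ** 2 - x ** 2
    if y_sq < 0:
        return None                                   # inconsistent measurements
    return sp1[0] + x, sp1[1] + math.sqrt(y_sq)

# Example: elapsed times of 6.0 ms and 7.5 ms place the microphone roughly 2.1 m from
# one speaker and 2.6 m from the other, about (0.4, 2.0) in these coordinates.
print(microphone_position(0.006, 0.0075))
```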
[0107]
Furthermore, in the first to third embodiments, the A/D converter 11 is provided because the outputs from the MIDI sound source 91 and the guide vocal reproduction unit 92 are analog audio signals. However, the present invention is not limited to this; when the outputs from the MIDI sound source 91 and the guide vocal reproduction unit 92 are digital audio signals, the A/D converter 11 need not be provided.
[0108]
As described above, the karaoke apparatus 1 according to the present invention emits the accompaniment sound 81 and the singing voice from the speaker array 3, and emits the measurement sound 83 from the speakers SP1 and SPn at both ends of the speaker array 3. By obtaining the elapsed time until the measurement sound 83 is collected by the microphone 2, the karaoke apparatus 1 can detect the microphone position, that is, the position of the singer 6, and can therefore always emit the directional beam 6a toward the singer 6. Moreover, by using the accompaniment sound 81 and the singing voice as a masker and generating the measurement sound 83 from harmonics of the fundamental frequency of the masker, the karaoke apparatus 1 can apply both simultaneous masking and temporal masking to the measurement sound 83. As a result, the singer 6 and the customers in the store 5 can enjoy karaoke without perceiving the measurement sound 83. Furthermore, since the measurement sound 83 is generated in a frequency band that is difficult for humans to perceive, the singer 6 and the customers in the store 5 are even less likely to perceive the measurement sound 83.
[0109]
A diagram illustrating the interior of a restaurant. An explanatory diagram of the microphone position detection method. An explanatory diagram of the selection of a masker. An explanatory diagram of the addition of a measurement sound. A functional block diagram of a karaoke apparatus. A flowchart showing the generation procedure of a directional beam when a measurement sound is included in a karaoke song. A functional block diagram of the karaoke apparatus according to the second embodiment. A flowchart showing the generation procedure of the directional beam when a measurement sound is generated based on an accompaniment sound. A flowchart showing another generation procedure of the directional beam when a measurement sound is generated based on an accompaniment sound. A functional block diagram of the karaoke apparatus according to the third embodiment. A flowchart showing the generation procedure of a directional beam when a measurement sound is generated based on a singing voice.
Explanation of Reference Numerals
[0110]
1 - karaoke apparatus, 2 - microphone, 3 - speaker array, 4 - monitor, 5 - store interior, 6 - singer, 6a, 70a to 70d - directional beam, 7 (7a to 7d) - table, 10 - control unit, 11, 16 - A/D converter, 12, 17 - low pass filter, 13, 18 - beam forming unit, 14 (14a to 14d), 19 (19a to 19d), 29 (29a to 29d) - band pass filter, 15 - microphone position detection unit, 20 - mixer, 21 - D/A converter, 22 - AMP, 23 - MIDI signal analysis unit, 24 - measurement sound MIDI signal generation unit, 25 - MIDI signal merging unit, 26 - voice signal analysis unit, 27 - measurement sound generation unit, 28 - signal merging unit, 81 - accompaniment sound, 82 - guide melody, 83 - measurement sound, 84 - guide vocal, 91 - MIDI sound source, 92 - guide vocal reproduction unit, 100 - operating unit, 151, 153 - level detection unit, 152 - timer unit, 154 - microphone position calculation unit, 155 - beam forming coefficient calculation unit, SP1 to SPn - speaker