Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2007158985
PROBLEM TO BE SOLVED: To enable three-dimensional sound reproduction using music data even when that data was not created with three-dimensional sound reproduction in mind.
SOLUTION: A three-dimensional sound effect addition process 13 reads out the music data designated by an operation of an operation unit 41 from among the music data in a music storage area 61, generates, from information included in that music data, three-dimensional sound effect information used to control three-dimensional sound reproduction, and stores the music data with the three-dimensional sound effect information added in the music storage area 61. [Selected figure] Figure 2
Device and program for adding stereophonic sound effect in music reproduction
[0001]
The present invention relates to an apparatus and program for adding a three-dimensional sound
effect in reproducing music such as a ringing tone of a mobile phone terminal.
[0002]
With the spread of mobile phone terminals, services that distribute ringing tones to their users have become common. Recently, users have also come to expect a sense of realism from ringing tone reproduction, and mobile phone terminals equipped with a three-dimensional sound reproduction function have accordingly appeared on the market. A mobile phone terminal of this type is disclosed, for example, in Patent Document 1.
[Patent Document 1] JP-A-2005-101988
[Patent Document 2] JP-A-6-165299
[0003]
A user whose mobile phone terminal is equipped with a stereophonic sound reproducing function naturally wants to use that function to enjoy realistic reproduction of a desired ringing tone. However, there are cases where no music data for stereophonic sound reproduction exists for the desired ringing tone, and only music data for monaural reproduction can be downloaded to the mobile phone terminal. In such a case, the user cannot enjoy realistic ringing tone reproduction that makes use of the three-dimensional sound reproduction function. Moreover, even when music data is intended for three-dimensional sound reproduction, the added three-dimensional sound effect may be so slight that the user finds it unsatisfactory.
[0004]
The present invention has been made in view of the above circumstances, and its object is to provide technical means capable of performing stereophonic sound reproduction using music data even when that data was not created on the assumption of three-dimensional sound reproduction, or is three-dimensional sound reproduction data with only a slight three-dimensional sound effect.
[0005]
The present invention provides storage means for storing music data, and stereophonic sound effect adding means which generates, from information satisfying a predetermined condition among the information constituting the music data stored in the storage means, stereophonic sound effect information used to control stereophonic sound reproduction, and outputs music data to which the stereophonic sound effect information has been added. According to this invention, even if the music data does not include stereophonic sound effect information, such information can be added to the music data so that stereophonic sound reproduction can be performed. Embodiments of the present invention also include the aspect of distributing to a user a program that causes a computer to function as a stereophonic sound effect adding device, and an embodiment in which the sound source device itself generates stereophonic sound effect information from the information contained in the music data and controls the reproduction of the stereophonic sound.
[0006]
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the following embodiments, the three-dimensional sound effect adding device according to the present invention is embodied as a mobile phone terminal. However, the device can also be embodied as various other electronic devices for reproducing music, such as a personal computer provided with a sound source capable of three-dimensional sound reproduction.
[0007]
First Embodiment FIG. 1 is a diagram showing the configuration of the entire communication system including a mobile telephone terminal 1 according to the present embodiment. As shown in FIG. 1, each mobile phone terminal 1 can, like an ordinary mobile phone terminal, connect to a large scale network 3 including a telephone network, the Internet, etc. via a base station 2. Connected to the large scale network 3 is a distribution server 4 which holds a database of ringing tones. The user can access the distribution server 4 with the mobile phone terminal 1 and download music data of a desired ringing tone to the mobile phone terminal 1.
[0008]
FIG. 2 is a block diagram showing the configuration of the mobile telephone terminal 1 according to the present embodiment. The CPU 10 is a processor that controls the entire mobile phone terminal 1. When a call originates from or arrives at the mobile phone terminal 1, the communication unit 20, under the control of the CPU 10, establishes a wireless link with the base station 2 via the antenna 21, establishes a communication link with the communication partner apparatus (not shown) via that link and the large scale network 3, and communicates with the partner apparatus. The voice processing unit 30 is a device that, during a call, receives voice information of the other party via the CPU 10 and outputs it as sound from the speaker 31, and that delivers the user's voice information picked up by the microphone 32 to the CPU 10.
[0009]
The operation unit 41 is a device for receiving various commands and information from the user, and consists of various push buttons provided on the operation surface of the mobile phone terminal 1 and sensors that detect their operation states. The display unit 42 is a device for presenting image information such as various messages to the user, and consists of an LCD panel or the like.
[0010]
Under the control of the CPU 10, the sound source unit 50 forms two-channel (L and R) tone signals representing tones such as ringing tones and outputs them as sound from the speakers 51L and 51R. The sound source unit 50 in the present embodiment has a stereophonic sound reproducing function that reproduces from the speakers 51L and 51R musical tones whose sound image is localized at the virtual sound source position designated by the CPU 10.
[0011]
FIG. 3 is a block diagram showing a configuration example of the sound source unit 50. The sound source unit 50 includes a note distribution processing unit 52, m musical tone formation processing units 53, a virtual sound source assignment processing unit 54, n virtual sound source processing units 55, and adders 56L and 56R.
[0012]
The note distribution processing unit 52 is a device that, when it receives from the CPU 10 a Note Message (described later), which is control information instructing musical tone formation, distributes that Note Message to one of the m musical tone formation processing units 53. The m tone formation processing units 53 have tone forming channel numbers from Ch = 0 to Ch = m-1, and a Note Message includes the tone forming channel number designating the unit 53 that is to process it. The note distribution processing unit 52 therefore determines the distribution destination from the tone forming channel number included in the Note Message received from the CPU 10. Each of the m musical tone formation processing units 53 forms musical tone signals in accordance with the Note Messages given to it via the note distribution processing unit 52.
[0013]
The virtual sound source assignment processing unit 54 distributes the musical tone signals formed by the musical tone formation processing units 53 to one of the n virtual sound source processing units 55, which have IDs from ID = 0 to ID = n-1. When the CPU 10 wants a particular virtual sound source processing unit 55 to process the musical tone signal formed on a certain tone forming channel, it gives the sound source unit 50 a 3D ch Assign Message that associates that tone forming channel with the ID of the virtual sound source processing unit 55. The virtual sound source assignment processing unit 54 distributes each musical tone signal to the virtual sound source processing unit 55 indicated by this 3D ch Assign Message.
[0014]
Each virtual sound source processing unit 55 is supplied from the CPU 10 with a 3D position (described later), which is control information instructing where the sound image of the speaker reproduction sound is to be localized. Each virtual sound source processing unit 55 applies to the musical tone signals of one or more channels given to it via the virtual sound source assignment processing unit 54 arithmetic processing using parameters corresponding to the virtual sound source position indicated by the 3D position (specifically, filter processing combining delay processing and attenuation processing), thereby generating two-channel (L and R) musical tone signals that localize the sound image of the speaker reproduction sound at that virtual sound source position.
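The delay-plus-attenuation filtering described above can be illustrated with a minimal sketch. The Python fragment below is an illustrative assumption rather than the patent's actual filter design: it derives a per-ear delay from an assumed head radius and speed of sound, and a per-ear gain from the inverse of the path length.

```python
# Minimal sketch of localization by delay + attenuation (assumed values).
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s (assumption)
HEAD_RADIUS = 0.09      # m, rough half inter-ear distance (assumption)

def localize(mono: np.ndarray, azimuth_deg: float, distance_m: float,
             sr: int = 44100) -> tuple[np.ndarray, np.ndarray]:
    """Return L/R signals whose sound image is localized at the given
    azimuth (degrees, negative = left of the user) and distance (m)."""
    az = np.radians(azimuth_deg)
    # Path length to each ear; the farther ear hears the sound later.
    d_l = max(distance_m + HEAD_RADIUS * np.sin(az), 0.1)
    d_r = max(distance_m - HEAD_RADIUS * np.sin(az), 0.1)
    channels = []
    for d in (d_l, d_r):
        delay = int(round(d / SPEED_OF_SOUND * sr))  # delay processing
        gain = 1.0 / d                               # attenuation processing
        channels.append(np.concatenate([np.zeros(delay), mono]) * gain)
    n = max(len(c) for c in channels)
    return tuple(np.pad(c, (0, n - len(c))) for c in channels)
```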
[0015]
The two-channel (L and R) tone signals obtained by the virtual sound source processing units 55 are summed by the adders 56L and 56R, respectively, and the resulting musical tone signals are supplied to the speakers 51L and 51R. Signal processing for obtaining reproduced sound whose sound image is localized at an arbitrary virtual sound source position is well known, and is disclosed, for example, in Patent Document 2.
[0016]
In FIG. 2, the storage unit 60 is a device that stores various programs executed by the CPU 10
and various data, and is configured by a ROM, a RAM, and the like. The storage unit 60 has a
music storage area 61 for storing various music data. The music data of the ringing tone
downloaded from the distribution server 4 described above is stored in the music storage area
61.
[0017]
The music data stored in the music storage area 61 in the present embodiment is performance control data (sequence data) such as SMF (Standard MIDI File) or SMAF (Synthetic music Mobile Application Format). In this type of performance control data, each unit of performance control consists of a pair: event information, which specifies the control content, and duration information, which specifies the execution timing of the event as the elapsed time from the beginning of the music or from the preceding event.
[0018]
FIG. 4 exemplifies the content of music data in the SMAF format, with the events performed according to the music data shown in order of execution. Each row represents the content of an event designated by one piece of event information, together with the execution timing designated by the duration information preceding that event information.
[0019]
In each row (event) shown in FIG. 4, the element in the "Event" column indicates the type of the event, and the element in the "Description" column indicates the parameters used to execute it. The element in the "Ch" column indicates the channel for events whose application is limited to a particular tone forming channel. The elements in the "Tick" and "Time" columns indicate the execution timing of the event: "Time" is the actual elapsed time from the start of the music to the execution of the event, while "Tick" expresses that elapsed time as a clock count of a predetermined cycle. The elements in the "Duration" column are the duration information values that determine the execution timing of the event. The elements in the "Gatetime" column are specific to Note Messages, the control information instructing the formation of a tone, and indicate the duration of sound generation.
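To make the duration/event pairing concrete, here is a hypothetical in-memory representation in Python. The field names mirror the columns of FIG. 4 but are otherwise illustrative assumptions, and the two sample entries paraphrase the 3D position and one of the 3D ch Assigns described in the next paragraph.

```python
# Hypothetical representation of duration/event pairs (not SMAF's binary layout).
from dataclasses import dataclass, field

@dataclass
class Event:
    duration: int          # ticks elapsed since the previous event
    kind: str              # e.g. "Note Message", "3D position", "3D ch Assign"
    ch: int | None = None  # tone forming channel, if the event is channel-bound
    params: dict = field(default_factory=dict)  # "Description" column contents

song = [
    Event(duration=0, kind="3D position",
          params={"id": 0, "azimuth_deg": -30, "elevation_deg": 0,
                  "distance_m": 2.0, "move_ticks": 2000}),
    Event(duration=0, kind="3D ch Assign", ch=0, params={"id": 0}),
]
```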
[0020]
The music data shown in FIG. 4 was created on the assumption of stereophonic sound reproduction, and includes one 3D position and four 3D ch Assigns as event information controlling that reproduction. As described above, a 3D position is event information that indicates a virtual sound source position. The 3D position in this example is addressed to the virtual sound source processing unit 55 with ID = 0, and instructs it to move the virtual sound source from its initial position to a position 2 m away in the direction of azimuth -30 degrees and elevation 0 degrees with respect to the user, taking a moving time of 2000 Ticks. When the CPU 10 gives this 3D position to the sound source unit 50, the virtual sound source processing unit 55 with ID = 0 starts processing for moving the virtual sound source position as instructed. Although the 3D position in this example moves the virtual sound source position, a 3D position whose moving time is 0 instead fixes the virtual sound source position at the single point in space determined by the distance, azimuth, and elevation angle.
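The move requested by this 3D position (azimuth -30 degrees, elevation 0 degrees, distance 2 m, moving time 2000 Ticks) can be pictured as an interpolation toward a target point. The sketch below assumes linear interpolation and a particular axis convention, neither of which the patent specifies; it also reflects the rule that a moving time of 0 fixes the source at the target point.

```python
# Sketch of how a sound source unit might evaluate a 3D position move.
import numpy as np

def position_at(tick: int, start: np.ndarray, azimuth_deg: float,
                elevation_deg: float, distance_m: float,
                move_ticks: int) -> np.ndarray:
    """Position (x, y, z) at `tick`; x = left/right, y = front, z = up
    (assumed axes)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    target = distance_m * np.array([np.cos(el) * np.sin(az),
                                    np.cos(el) * np.cos(az),
                                    np.sin(el)])
    if move_ticks == 0:
        return target                 # moving time 0 fixes the source there
    t = min(tick / move_ticks, 1.0)   # clamp once the move is complete
    return start + t * (target - start)

# The move from FIG. 4, halfway through its 2000-Tick travel time:
halfway = position_at(1000, np.zeros(3), -30.0, 0.0, 2.0, 2000)
```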
[0021]
The four 3D ch Assigns indicate that the musical tone signals output from the musical tone formation processing units 53 whose tone forming channels ch are 0 to 3 are all to be distributed to the virtual sound source processing unit 55 with ID = 0. These 3D ch Assigns are followed by a series of Note Messages designating tone forming channels 0 to 3. After sending the four 3D ch Assigns to the sound source unit 50, the CPU 10 gives it this series of Note Messages. The virtual sound source assignment processing unit 54 then supplies the musical tone signals formed according to those Note Messages by the musical tone formation processing units 53 of channels ch = 0 to 3 to the virtual sound source processing unit 55 with ID = 0, in accordance with the 3D ch Assigns. Since that virtual sound source processing unit 55 has already received the 3D position described above and started the process of moving the virtual sound source position, each musical tone signal formed according to the series of Note Messages reaches the speakers 51L and 51R as two-channel (L and R) tone signals processed by the virtual sound source processing unit 55. As a result, speaker reproduction sound is obtained in which the localization of the sound image moves in accordance with the 3D position.
[0022]
As described above, 3D position and 3D ch Assign events control stereophonic sound reproduction, so in this embodiment these pieces of event information are collectively called stereophonic sound effect information. The music storage area 61 of the storage unit 60 may store music data that includes such stereophonic sound effect information, but it may also store music data that does not. The feature of this embodiment is that the mobile phone terminal 1 is provided with a mechanism that generates stereophonic sound effect information suitable mainly for the latter kind of music data and adds it to the original music data.
[0023]
In FIG. 2, various types of processing executed by the CPU 10 are shown in a box representing
the CPU 10. These are processes executed by the CPU 10 according to the program stored in the
storage unit 60.
[0024]
The communication control processing 11 controls the communication unit 20 so that it performs the processing described above for establishing communication links at call origination and reception, delivers the other party's voice information received by the communication unit 20 to the voice processing unit 30, and controls the communication unit 20 so that the voice information of the user of the mobile phone terminal 1 supplied from the voice processing unit 30 is sent to the other party. In addition, when there is an incoming call, the communication control processing 11 instructs the sequencer 12 to reproduce the ringing tone.
[0025]
The sequencer 12 is a process which, when instructed by the communication control processing 11 to reproduce the ringing tone, reads the music data designated in advance by the user from among the music data in the music storage area 61 and controls the sound source unit 50 so that it forms the tone signals of the ringing tone according to that music data. More specifically, while reading the music data, this process sends each piece of event information to the sound source unit 50 as it is read; whenever duration information is read, it waits for the designated time to elapse before reading the next event information. By repeating this operation, it controls the formation of tone signals by the sound source unit 50.
[0026]
The three-dimensional sound effect addition processing 13 reads the music data designated by operation of the operation unit 41 from among the music data in the music storage area 61, converts it into music data to which appropriate three-dimensional sound effect information has been added, and stores the converted music data in the music storage area 61. In a preferred embodiment, the program for the three-dimensional sound effect addition process 13 is written into the ROM of the storage unit 60 when the mobile phone terminal 1 is manufactured. In another aspect, the user who has purchased the mobile phone terminal 1 downloads the program from a predetermined site on the Internet into the storage unit 60.
[0027]
The three-dimensional sound effect addition process 13 is a process unique to this embodiment. Three types of processing for adding stereophonic sound effect information can be executed within it. When the CPU 10 executes the three-dimensional sound effect addition process 13, the user can designate one or more of these processes by operating the operation unit 41.
[0028]
a. Perspective effect addition processing In this processing, the sound intensity information (for example, velocity or volume) included in the original music data is converted into a distance to the virtual sound source position, and stereophonic sound effect information that places the virtual sound source that distance away from the user is generated and added to the original music data.
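The velocity-to-distance conversion at the heart of this processing can be sketched as a simple monotone mapping: a louder note yields a nearer source. The formula and the 1 m to 4 m range below are illustrative assumptions, not the patent's mapping (the execution example of FIG. 6, described later, simply assigns 1 m and 2 m to velocities 100 and 50).

```python
# Sketch: map a MIDI-style velocity (1-127) to a virtual source distance.
def velocity_to_distance(velocity: int, near_m: float = 1.0,
                         far_m: float = 4.0) -> float:
    """Maximum velocity lands at `near_m`; velocity 1 lands at `far_m`."""
    v = max(1, min(velocity, 127))
    return far_m - (far_m - near_m) * (v - 1) / 126

velocity_to_distance(100)  # ~1.64 m: a strong note sounds close
velocity_to_distance(50)   # ~2.83 m: a weak note sounds farther away
```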
[0029]
b. Movement effect addition processing In this processing, periodically changing information such as note information is extracted from the original music data, and stereophonic sound effect information that moves the virtual sound source position periodically, in synchronization with the period of that change, is generated and added to the original music data. Trajectories along which the virtual sound source position can be moved include a linear trajectory extending left and right in front of the user, a circular trajectory surrounding the user, an elliptical trajectory surrounding the user, and so on. Trajectory definition information for these trajectories (functions or the like for calculating the coordinates of each point on the trajectory) is stored in the storage unit 60, and the user can preselect a desired trajectory by operating the operation unit 41. The movement effect addition process then calculates, from the trajectory definition information of the designated trajectory, a virtual sound source position to be assigned to each periodically changing note, and generates stereophonic sound effect information that localizes the speaker reproduction sound at those positions, as sketched below.
[0030]
c. Doppler effect addition processing In this processing, pitch control information such as a pitch bend event included in the original music data is converted into stereophonic sound effect information that brings the virtual sound source position closer to the user from a distance, or moves it away, and this information is added to the music data. The above is the detailed configuration of the mobile phone terminal 1 in the present embodiment.
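The conversion mimics the physical Doppler effect: a rising pitch suggests an approaching source, a falling pitch a receding one. The sketch below encodes that rule; the distances are assumptions, while the 500-Tick travel time echoes the execution example of FIG. 8 described later.

```python
# Sketch: replace one pitch bend event with a 3D-position-style move.
def doppler_event(bend_sign: str, far_m: float = 5.0, near_m: float = 0.5,
                  move_ticks: int = 500) -> dict:
    if bend_sign == "+":        # pitch rises -> source approaches the user
        start, end = far_m, near_m
    else:                       # pitch falls -> source recedes from the user
        start, end = near_m, far_m
    return {"kind": "3D position", "id": 0, "azimuth_deg": 0,
            "elevation_deg": 0, "start_m": start, "end_m": end,
            "move_ticks": move_ticks}
```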
[0031]
FIG. 5 is a flowchart outlining the processing contents of the stereophonic sound effect addition processing 13 in the present embodiment; the operation of the embodiment is described below with reference to it. The stereophonic sound effect addition process 13 starts when the user designates a song by operating the operation unit 41 and instructs conversion for stereophonic sound effect addition. First, in step S1, the conversion conditions are set. Specifically, a screen is displayed on the display unit 42 asking which of the perspective effect addition processing, the movement effect addition processing, and the Doppler effect addition processing described above is to be executed, and the user's instruction is acquired via the operation unit 41. The user may select one of the three processes, or two, or all of them. Also in step S1, when execution of the movement effect addition process is selected, the display unit 42 displays a screen asking for the trajectory along which the virtual sound source position is to be moved, and the user's choice is acquired via the operation unit 41.
[0032]
Next, in step S2, the music data of the song designated by the user is read out from the music storage area 61 and analyzed. Specifically, when execution of the perspective effect addition process has been instructed, velocity, volume, and similar values are identified as conversion targets for stereophonic sound effect information; when execution of the Doppler effect addition process has been instructed, pitch bend event information and the like are identified as conversion targets. When execution of the movement effect addition process has been instructed, the locations of periodic information, such as a series of Note Messages whose pitch changes periodically, are identified in the music data.
[0033]
Next, in step S3, the perspective effect addition processing, movement effect addition processing, or Doppler effect addition processing is executed based on the conversion target event information or the periodic information obtained in step S2, music data with stereophonic sound effect information added to the original music data is generated, and the result is stored in the music storage area 61. The generated music data may overwrite the original music data in the music storage area 61, or may be stored there under a file name different from the original. The user selects the storage method by operating the operation unit 41. The above is the processing content of the three-dimensional sound effect addition processing 13.
[0034]
FIG. 6 shows an execution example of the perspective effect addition process. In this example, the original music data before conversion includes a Note Message whose velocity value Vel is 100 and a Note Message whose velocity value Vel is 50. The perspective effect addition processing aligns the velocity values of both Note Messages to 50. Then, before the Note Message whose original velocity value was 100, it adds a 3D Position instructing the virtual sound source processing unit 55 with ID = 0 to set the virtual sound image position at a distance of 1 m from the user, together with a 3D ch Assign instructing that the tone signal formed according to that Note Message be assigned to the virtual sound source processing unit 55 with ID = 0. Likewise, before the Note Message whose original velocity value was 50, it adds a 3D Position instructing the same unit to set the virtual sound image position at a distance of 2 m from the user, together with the corresponding 3D ch Assign. As a result of this processing, the strength (velocity value) of the sound expressed by the original music data is converted into the distance between the user and the virtual sound source position.
[0035]
FIG. 7 shows an execution example of the movement effect addition processing. In this example, the music data before conversion contains repetitions of four Note Messages whose pitch changes periodically in the order A → B → C → D, so these periodic Note Messages are extracted as targets of the movement effect addition processing. The processing adds a pair consisting of a 3D position and a 3D ch Assign in front of each of these Note Messages. Since the user has selected the linear trajectory in this example, the stereophonic sound effect information added before each Note Message localizes the sound image of the tone generated by the Note Message that follows it at positions that change periodically, in the order right → middle → left → middle, along a linear trajectory running left and right in front of the user. Although the pitch in this example repeats the change A → B → C → D exactly, the extraction tolerates deviations: even if, for example, the sequence A → B → C → D is followed by A → B → C → E, the change A → B → C can be regarded as repeating periodically. The movement effect addition process is therefore performed even when the music data indicates such a pitch change, and stereophonic sound effect information that periodically moves the localization of the sound image is added before the periodic Note Messages.
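The tolerant periodicity extraction just described can be sketched as a search for the shortest repeating pitch pattern that explains most of a note sequence. The brute-force prefix matching and the 85% agreement threshold below are illustrative assumptions, not the patent's algorithm.

```python
# Sketch: find the period of a mostly-repeating pitch sequence.
def find_period(pitches: list[str], min_repeats: int = 2,
                tolerance: float = 0.85) -> int | None:
    """Shortest period p such that repeating the first p pitches
    matches at least `tolerance` of the whole sequence."""
    n = len(pitches)
    for p in range(1, n // min_repeats + 1):
        matches = sum(pitches[i] == pitches[i % p] for i in range(n))
        if matches / n >= tolerance:
            return p
    return None

find_period(["A", "B", "C", "D", "A", "B", "C", "D"])  # -> 4
find_period(["A", "B", "C", "D", "A", "B", "C", "E"])  # -> 4, tail tolerated
```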
[0036]
FIG. 8 shows an execution example of the Doppler effect addition process. In the original music data before conversion, one of two Note Messages is preceded by Pitch Bend: +, indicating that the pitch rises with the passage of time, and the other by Pitch Bend: -, indicating that the pitch falls. The Doppler effect addition processing replaces the former Pitch Bend: + with stereophonic sound effect information that first sets the position of the sound image far from the user and then moves it close to the user over 500 Ticks, and replaces the latter Pitch Bend: - with stereophonic sound effect information that first sets the position of the sound image near the user and then moves it far away over 500 Ticks. In this way, the Doppler effect addition process converts information that raises or lowers the pitch in the original music data into an expression that brings the sound source closer or moves it further away.
[0037]
As described above, according to the present embodiment, even when the acquired music data was not created on the assumption of three-dimensional sound reproduction, appropriate three-dimensional sound effect information is added to it, so that stereophonic sound reproduction becomes possible.
[0038]
Second Embodiment FIG. 9 is a block diagram showing a configuration of a mobile telephone
terminal 1A according to a second embodiment of the present invention.
FIG. 10 is a block diagram showing the configuration of the sound source unit 50A in the mobile
phone terminal 1A. In these figures, parts corresponding to those shown in FIG. 2 and FIG. 3 are
given the same reference numerals, and the description thereof is omitted.
[0039]
In the first embodiment, the music data stored in the music storage area 61 of the storage unit 60 is sequence data such as SMAF. In the present embodiment, by contrast, the music storage area 61A of the storage unit 60A stores music data in which PCM sample data of the tone waveform is compressed in a specific compression format such as MP3 (MPEG Audio Layer-3) or AAC (Advanced Audio Coding). The CPU 10A performs processing for adding stereophonic sound effects to the music data in the music storage area 61A.
[0040]
The CPU 10A has functions for executing the communication control process 14, the decoder 15, the analysis process 16, the three-dimensional sound effect information generation process 17, and the sequencer 18. When a piece of music data in the music storage area 61A is designated as a processing target by operation of the operation unit 41 and an instruction to add a stereophonic sound effect is given, the CPU 10A executes the decoder 15, the analysis process 16, and the three-dimensional sound effect information generation process 17.
[0041]
The decoder 15 reads the music data to be processed from the music storage area 61, decompresses it, and delivers it to the analysis processing 16 as musical tone waveform data. The analysis processing 16 analyzes the decompressed tone waveform data by known analysis methods and generates: strength information indicating the temporal change of the amplitude of the tone waveform as the music progresses; period information indicating where periodic pitch changes occur in the music and the form of those changes (for example, the number of musical tones constituting one period); and section information indicating sections in which the pitch changes continuously due to pitch bend. The analysis processing 16 then delivers the strength information, period information, and section information obtained in this way to the three-dimensional sound effect information generation processing 17.
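The strength-information part of this analysis can be sketched as a short-time RMS envelope of the decoded PCM, so that louder passages can later pull the virtual source nearer. The frame size and hop below are assumptions; the patent only says known analysis methods are used.

```python
# Sketch: per-frame RMS amplitude envelope of a decoded mono waveform.
import numpy as np

def rms_envelope(pcm: np.ndarray, frame: int = 2048, hop: int = 1024):
    """Return (frame_start_samples, rms_values) for a float waveform."""
    starts = np.arange(0, max(len(pcm) - frame, 1), hop)
    rms = np.array([np.sqrt(np.mean(pcm[s:s + frame] ** 2)) for s in starts])
    return starts, rms
```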
[0042]
The three-dimensional sound effect information generation process 17 generates three-dimensional sound effect information based on the information handed over from the analysis process 16. This three-dimensional sound effect information is sequence data to be reproduced synchronously with the original music data; when the music is reproduced from the music data, it instructs the application of various three-dimensional sound effects to the reproduced sound. The process 17 generates, for example, the following contents. First, it generates three-dimensional sound effect information that moves the virtual sound source position nearer or farther according to the strength of the sound indicated by the strength information during reproduction of the music. Second, in the sections indicated by the period information, it generates three-dimensional sound effect information that gives the virtual sound source position the periodic change indicated by that information. Third, in the sections indicated by the section information, it generates three-dimensional sound effect information that moves the virtual sound source position away from or toward the user. Finally, the process 17 stores the generated three-dimensional sound effect information in the music storage area 61 in association with the music data being processed; the generated information may also be stored in the music storage area 61 as a single music file together with the original music data.
[0043]
On the other hand, when the mobile phone terminal 1A receives an incoming call, the CPU 10A executes the communication control processing 14, the decoder 15, and the sequencer 18. In this case, the communication control processing 14 instructs the decoder 15 to read out and decompress the ringing tone music data, and instructs the sequencer 18 to reproduce the stereophonic sound effect information associated with that music data. As a result, the decoder 15 and the sequencer 18 reproduce the ringing tone music data and its associated stereophonic sound effect information synchronously. More specifically, the decoder 15 reads the music data from the music storage area 61, decompresses it, and sends the resulting tone waveform data to the sound source unit 50A. Meanwhile, the sequencer 18 reads the stereophonic sound effect information associated with the music data from the music storage area 61 and sends each piece of event information contained in it to the sound source unit 50A at the timing specified by the accompanying duration information.
[0044]
As shown in FIG. 10, the sound source unit 50A has no units corresponding to the note distribution processing unit 52 and the tone formation processing units 53 of the first embodiment. In the sound source unit 50A, the virtual sound source assignment processing unit 54A distributes the tone waveform data supplied from the decoder 15 to one of the n virtual sound source processing units 55. In the example shown in FIG. 10, two-channel (L and R) tone waveform data is applied to the virtual sound source assignment processing unit 54A, but single-channel tone waveform data may be applied instead. The event information of the stereophonic sound effect information is supplied from the sequencer 18 to the virtual sound source assignment processing unit 54A, and includes the 3D ch Assign Message described in the first embodiment. Here, the 3D ch Assign Message is event information that associates the channel of the tone waveform data to be processed (in this example, the L channel or the R channel) with the ID of the virtual sound source processing unit 55 that is to process it. The virtual sound source assignment processing unit 54A distributes the tone waveform data supplied from the decoder 15 to the virtual sound source processing units 55 in accordance with these 3D ch Assign Messages. The configuration and function of the stages after the virtual sound source processing units 55 are the same as in the first embodiment.
[0045]
FIG. 11 shows an example of stereophonic sound effect information reproduced by the sequencer 18. The meanings of Event, Description, Ch, Tick, and Duration are as described in the first embodiment. In this embodiment, Tick = 0 and Time = 0 correspond to the moment the decoder 15 starts reproducing the tone waveform data of the ringing tone, at which point the sequencer 18 also starts reproducing the three-dimensional sound effect information. In this example, the 3D position Message instructs the virtual sound source processing unit 55 with ID = 0 to carry out virtual sound source processing that moves the virtual sound source position over 2000 Ticks. The following 3D ch Assign Message instructs that, at 55 Ticks, virtual sound source processing of the L channel tone waveform data be assigned to the virtual sound source processing unit 55 with ID = 0, and the next 3D ch Assign Message instructs that, at 60 Ticks, virtual sound source processing of the R channel tone waveform data be assigned to the same unit. Consequently, the virtual sound source processing unit 55 performs virtual sound source processing that moves the virtual sound source position of the L channel sound from 55 Ticks to 2000 Ticks, and that of the R channel sound from 60 Ticks to 2000 Ticks.
[0046]
According to the present embodiment, execution of the decoder 15, the analysis process 16, and the three-dimensional sound effect information generation process 17 generates the above three-dimensional sound effect information from the music data and stores it in the music storage area 61 together with the original music data. At the time of an incoming call, the music data and the stereophonic sound effect information are then reproduced synchronously. Therefore, as in the first embodiment, even if the original music data does not support stereophonic sound reproduction, the music can be reproduced with a stereophonic sound effect in which the virtual sound source position moves dynamically.
[0047]
While the first and second embodiments of the present invention have been described above, other embodiments of the invention are also conceivable. For example:
[0048]
(1) In the first embodiment, the perspective effect addition process converts the velocity value Vel into the distance to the virtual sound source position. However, the master volume value specifying the volume of the entire song may differ from song to song, and even within one song the channel volume, which is a volume indication per tone forming channel, may differ between channels. Therefore, the perspective effect addition processing may, for each Note Message, compute the strength of the corresponding sound from the master volume value, the channel volume value of the tone forming channel to which the Note Message belongs, and the velocity value of the Note Message itself, and generate stereophonic sound effect information that replaces this sound strength with the distance to the virtual sound source position. In this case, if the music data contains many different velocity values and channel volume values, the perspective effect addition processing adds a large amount of stereophonic sound effect information, and the size of the music data may increase significantly. To avoid this inconvenience, the perspective effect addition process may generate only a limited number of 3D positions indicating representative virtual sound source positions in space, assign to each Note Message the 3D position that most closely approximates the strength of its sound, and handle the sound strengths that cannot be expressed by those 3D positions by adjusting the velocity value or channel volume value of the Note Message. The same applies to the Doppler effect addition processing.
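The idea of a limited palette of representative positions can be sketched as quantization plus a velocity residual. The palette values and the inverse-distance residual formula below are illustrative assumptions, not prescribed by the patent.

```python
# Sketch: quantize distances to a small palette of representative positions.
def quantize_distance(d: float, palette=(1.0, 2.0, 4.0)) -> float:
    """Nearest representative distance among the assumed palette."""
    return min(palette, key=lambda p: abs(p - d))

def residual_velocity(vel: int, d: float, dq: float) -> int:
    """Express the loudness not captured by quantization (d vs. dq)
    through the Note Message's velocity instead (assumed scaling)."""
    return max(1, min(127, round(vel * dq / max(d, 0.1))))
```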
[0049]
(2) The movement effect addition process needs to analyze the entire music data to extract periodic information, but the perspective effect addition process and the Doppler effect addition process do not. Therefore, if the movement effect addition process need not be executed, the following embodiment is possible: the CPU 10 supplies the event information of the music data to the sound source unit 50 as it is, without executing the three-dimensional sound effect addition processing; the sound source unit 50 analyzes the music data supplied from the CPU 10 in real time, and whenever it finds event information that would be subject to perspective effect addition processing or Doppler effect addition processing, it performs the equivalent processing on that event information and forms the tone signals using the resulting event information.
[0050]
(3) In each of the above embodiments, the stereophonic sound effect addition process 13 is applied to music data that the mobile phone terminal 1 has acquired via the large scale network 3, but the entity that executes the stereophonic sound effect addition process 13 need not be the device that acquires and uses the music data. For example, in one possible embodiment the mobile phone terminal 1 sends music data to a predetermined server via the large scale network 3 in accordance with an instruction from the user, the server performs the stereophonic sound effect addition processing on that music data, and the music data with the stereophonic sound effect information added is sent back to the mobile phone terminal 1.
[0051]
BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a diagram showing the configuration of the entire communication system including a mobile telephone terminal provided with the function of a sound effect addition apparatus according to the first embodiment of this invention. FIG. 2 is a block diagram showing the configuration of the mobile telephone terminal in that embodiment. FIG. 3 is a block diagram showing the configuration of the sound source unit in the same mobile telephone terminal. FIG. 4 is a diagram showing an example of the music data used in the embodiment. FIG. 5 is a flowchart showing the content of the stereophonic sound effect addition processing in the embodiment. FIG. 6 is a diagram showing an execution example of the perspective effect addition processing in the embodiment. FIG. 7 is a diagram showing an execution example of the movement effect addition processing in the embodiment. FIG. 8 is a diagram showing an execution example of the Doppler effect addition processing in the embodiment. FIG. 9 is a block diagram showing the configuration of the mobile telephone terminal according to the second embodiment of this invention. FIG. 10 is a block diagram showing the configuration of the sound source unit of that mobile telephone terminal. FIG. 11 is a diagram showing an example of the three-dimensional sound effect information in the embodiment.
Explanation of sign
[0052]
1, 1A: mobile phone terminal; 10, 10A: CPU; 20: communication unit; 21: antenna; 30: voice processing unit; 31: speaker; 32: microphone; 41: operation unit; 42: display unit; 50: sound source unit; 51L, 51R: speakers; 60, 60A: storage unit; 61, 61A: music storage area; 11, 14: communication control processing; 12, 18: sequencer; 13, 17: three-dimensional sound effect addition processing; 15: decoder; 16: analysis processing.