Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2008085472
The present invention provides a sound source localization / identification device that makes it easy to implement the two functions of sound source localization and sound source identification while realizing a practical calculation speed. The device comprises a sound source localization unit (8) that identifies the direction of a sound source and a sound source identification unit (9) that identifies the type of the sound source. The sound source localization unit (8) and the sound source identification unit (9) are each configured as a neural network of pulse neuron models, and each pulse neuron model is constituted by a digital circuit. [Selected figure] Figure 3
Sound source localization and identification device
[0001]
The present invention relates to an apparatus that performs both sound source localization and sound source identification (hereinafter referred to as a sound source localization / identification apparatus), and more particularly to a sound source localization / identification apparatus using pulse neuron models.
[0002]
The basic functions for grasping the surrounding environment by sound are sound source localization, which identifies the direction of a sound source, and sound source recognition (sound source identification), which identifies the type of a sound source. Sound source recognition devices (sound source identification devices) using neural networks are described in Non-Patent Documents 1 and 2 and Patent Document 1 below.
04-05-2019
1
Sound source localization devices using neural networks are described in Non-Patent Documents 3, 4, and 5 below. Furthermore, Non-Patent Document 6 below describes a system that uses the time difference detection mechanism of a sound source localization device as a pre-processing mechanism for a sound source recognition device.
[0003]
A time difference detector for sound source localization by the applicant of the present application is described in Patent Document 2 below. Non-Patent Documents 7, 8, and 9 below are related documents.
Patent Document 1: Japanese Patent No. 3164100
Patent Document 2: Japanese Patent Application No. 2005-362915
Non-Patent Document 1: "Study on sound source recognition using a pulse neuron model", Nagoya Institute of Technology, 1997 graduation thesis, March 1998
Non-Patent Document 2: Takuya Sakaguchi, Susumu Kuroyanagi, Akira Iwata, "Sound source identification system for grasping the environment", IEICE Technical Report, The Institute of Electronics, Information and Communication Engineers, December 1999, NC 99-70, p. 61-68
Non-Patent Document 3: Susumu Kuroyanagi, Akira Iwata, "Sound source direction perception by a pulse-transmission auditory neural network model - extraction of time difference and sound pressure difference -", IEICE Technical Report, The Institute of Electronics, Information and Communication Engineers, March 1993, NC 92-149
Non-Patent Document 4: S. Kuroyanagi, A. Iwata, "Supervised Learning Rule for Pulsed Neuron Model", IEICE Technical Report, The Institute of Electronics, Information and Communication Engineers, March 1998, NC 97-151, p. 95-102
Non-Patent Document 5: Susumu Kuroyanagi, Koichi Hirata, Akira Iwata, "Competitive Learning Method for Pulsed Neural Networks", IEICE Technical Report, The Institute of Electronics, Information and Communication Engineers, March 2002, NC 2001-210, p. 113-120
Non-Patent Document 6: Hiroyuki Nakao, Susumu Kuroyanagi, Akira Iwata, "Sound image extraction model using directional information of sound source by pulse neural network", IEICE Technical Report, The Institute of Electronics, Information and Communication Engineers, March 2001, NC 2000-108, p. 39-46
Non-Patent Document 7: Akihisa Tanaka, Susumu Kuroyanagi, Akira Iwata, "A Hardware Implementation Method of Neural Network for FPGA", IEICE Technical Report, The Institute of Electronics, Information and Communication Engineers, March 2001, NC 2000-179, p. 175-182
Non-Patent Document 8: Nobuyoshi Niimu, Susumu Kuroyanagi, Akira Iwata, "Implementation Method of Pulsed Neuron Model for FPGA", IEICE Technical Report, The Institute of Electronics, Information and Communication Engineers, NC 2001-211, p. 121-128
Non-Patent Document 9: Susumu Kuroyanagi, Akira Iwata, "Competitive Learning Neural Network Using Pulsed Neuron Model for Auditory Information Processing System", Transactions of the Institute of Electronics, Information and Communication Engineers (D-II), July 2004, Vol. J87-D-II, No. 7, p. 1496-1504
[0004]
When a sound source localization apparatus and a sound source identification apparatus are to be installed, for example, in a residence to distinguish the directions and types of the various sounds generated there, it is desirable that the sound source localization apparatus and the sound source identification apparatus be a single apparatus. However, it is not easy to implement the two functions of sound source localization and sound source identification in one device.
[0005]
In addition, when these functions are realized in software on a computer, a huge number of operations must be executed sequentially on the CPU, so the execution speed drops significantly and a practical operation speed cannot be achieved.
[0006]
SUMMARY OF THE INVENTION: The present invention solves the above-mentioned problems, and aims to provide a sound source localization / identification device that makes it easy to implement the two functions of sound source localization and sound source identification while realizing a practical calculation speed.
[0007]
The sound source localization / identification apparatus according to the present invention includes a sound source localization unit that identifies the direction of a sound source and a sound source identification unit that identifies the type of the sound source. It is characterized in that the sound source localization unit and the sound source identification unit are both configured as neural networks comprising a plurality of pulse neuron models, and each pulse neuron model is constituted by a digital circuit.
[0008]
In the sound source localization / identification apparatus of the present invention, the sound source localization unit and the sound source identification unit are both configured as neural networks provided with a plurality of pulse neuron models, and each pulse neuron model is configured as a digital circuit. The apparatus can therefore be realized by mounting a large number of common elements, namely pulse neuron models, on a device such as an FPGA, which makes it easy to implement the two functions of sound source localization and sound source identification.
Furthermore, since the operations in the pulse neuron models are executed in parallel on the digital circuit, a practical operation speed can be realized.
[0009]
Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
[0010]
As shown in FIG. 1, the sound source localization / identification apparatus 1 includes left and right microphones 2 and 3 and a main unit 4 to which the microphones 2 and 3 are connected. The main unit 4 includes a display unit 5.
[0011]
As shown in FIG. 2 and FIG. 3, the main unit 4 includes left and right input signal processing units 6 and 7 connected to the microphones 2 and 3 respectively, a sound source localization unit 8 connected to both input signal processing units 6 and 7, and a sound source identification unit 9 connected to the input signal processing unit 7.
The sound source identification unit 9 may be connected to at least one of the input signal processing units 6 and 7.
The sound source localization unit 8 includes a time difference feature extraction unit 10 and a localization CONP algorithm unit 11.
The sound source identification unit 9 includes an identification CONP algorithm unit 12 at the front stage and an identification CONP algorithm unit 13 at the rear stage. Note that the sound source localization / identification device 1 first performs learning and then recognizes sound sources, and some parts operate differently between learning and recognition; FIG. 2 therefore shows the configuration at the time of learning and FIG. 3 the configuration at the time of recognition.
[0012]
The input signal processing units 6 and 7, the time difference feature extraction unit 10, the localization CONP algorithm unit 11, and the identification CONP algorithm units 12 and 13 are all neural networks configured from a plurality of pulse neuron models (hereinafter referred to as "PN models"). The PN model is a neuron model that uses pulse trains as its input and output signals. In the sound source localization / identification device 1, each PN model is constituted by a digital circuit.
[0013]
FIG. 4 shows a schematic view of the PN model. In this model, when a pulse xn(t) = 1 arrives on the n-th input channel, the local membrane potential pn(t) of the n-th synapse rises by the coupling weight wn and then decays toward the resting potential with time constant τ. The internal potential I(t) of the PN model is expressed as the sum of the local membrane potentials at that time. The PN model fires (that is, outputs a pulse "1") when the internal potential reaches or exceeds the threshold θ. However, just as nerve cells have a refractory period related to firing, this PN model also does not fire for a period RP after firing, even if the internal potential exceeds the threshold. Hereinafter, a PN model is simply referred to as a neuron.
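As an aside for implementers, the membrane dynamics described above can be sketched in a few lines. The exponential decay factor, the parameter values, and the update order below are illustrative assumptions (the exact update rule is given by Equation 1 further below); only the threshold-and-refractory firing logic follows the text directly.

```python
import math

def simulate_pn(inputs, weights, tau=5.0, theta=1.0, rp=3, dt=1.0):
    """Minimal discrete-time sketch of the pulse neuron (PN) model.
    `inputs` is a list of per-step input pulse vectors (0/1), `weights`
    the coupling weights w_n.  The decay form exp(-dt/tau) is an assumption."""
    decay = math.exp(-dt / tau)
    p = [0.0] * len(weights)       # local membrane potentials p_n(t)
    et = rp + 1                    # elapsed time since last firing, ET(0) > RP
    out = []
    for x in inputs:
        # each arriving pulse raises p_n by w_n; p_n then decays toward rest
        p = [pn * decay + wn * xn for pn, wn, xn in zip(p, weights, x)]
        i_t = sum(p)               # internal potential I(t) = sum of p_n(t)
        if i_t >= theta and et > rp:   # fire only outside the refractory period
            out.append(1); et = 0
        else:
            out.append(0); et += dt
    return out
```

With two channels driven continuously, the neuron fires once and then stays silent for the refractory period before firing again, matching the behavior described for FIG. 4.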
[0014]
In the sound source localization / identification apparatus 1, the PN model is configured as a digital circuit and implemented on an FPGA (Field Programmable Gate Array). FIG. 5 shows an implementation example of the PN model on the FPGA. This implementation example is described in Non-Patent Document 8 and elsewhere, so a detailed description is omitted here. Because this implementation realizes the decay processing without using a multiplier, it is well suited to implementation on a digital circuit.
[0015]
As shown in FIG. 6, the input signal processing units 6 and 7 each include a cochlea model that uses band pass filters (BPFs) to decompose the input signal into signals for the individual frequency components, a hair cell model that applies a nonlinear conversion to the output of the cochlea model, and a cochlear nerve model that converts the output of the hair cell model into pulse trains whose pulse frequency is proportional to the signal strength. That is, the input signal processing units 6 and 7 convert the left and right input signals into pulse trains whose pulse frequency corresponds to the signal strength of each frequency component.
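As an illustration of the last stage of this chain, the sketch below emits a pulse train whose pulse frequency is proportional to an (already band-filtered and nonlinearly converted) signal strength. The integrate-and-fire accumulator and the `gain` parameter are assumptions; the text does not spell out the conversion circuit.

```python
def to_pulse_train(strength, n_steps, gain=0.5):
    """Hedged sketch of the cochlear-nerve stage: emit a pulse train whose
    pulse frequency is proportional to the per-band signal strength."""
    acc, out = 0.0, []
    for _ in range(n_steps):
        acc += gain * strength         # accumulate at a rate proportional to strength
        if acc >= 1.0:
            out.append(1); acc -= 1.0  # emit a pulse each time the accumulator fills
        else:
            out.append(0)
    return out
```

Doubling the input strength doubles the pulse rate, which is the property the PN-model stages downstream rely on.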
[0016]
The time difference feature extraction unit 10 includes an extraction model, shown in FIG. 7, for each frequency component. The extraction model is described in Non-Patent Document 5 and elsewhere, so a detailed description is omitted here. The extraction model has a row of PN models for each frequency component; using time delay elements, the left signal (the pulse train generated from the left input signal) is fed in sequentially from one end of the row, and the right signal (the pulse train generated from the right input signal) from the other end. Each PN model is configured to fire only when the left and right signals arrive simultaneously. The time difference feature extraction unit 10 thus outputs a unique pattern that changes according to the time difference between the input signals.
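The coincidence row can be sketched as follows. The number of neurons, the unit delay per stage, and the pulse-train encoding are illustrative assumptions; the essential point, as in the text, is that each neuron fires only on simultaneous arrival of the (differently delayed) left and right pulses, so the position of the firing neuron encodes the interaural time difference.

```python
def coincidence_pattern(left, right, n_neurons):
    """Sketch of the delay-line extraction model: the left pulse train enters
    one end of a row of coincidence neurons, the right train the other end,
    each passing through unit time-delay elements."""
    fired = [0] * n_neurons
    for t in range(len(left)):
        for k in range(n_neurons):
            # neuron k sees the left signal delayed by k steps and the
            # right signal delayed by (n_neurons - 1 - k) steps
            dl, dr = t - k, t - (n_neurons - 1 - k)
            l = left[dl] if dl >= 0 else 0
            r = right[dr] if dr >= 0 else 0
            if l == 1 and r == 1:
                fired[k] = 1   # coincidence: this neuron marks the time difference
    return fired
```

A zero time difference lights the middle neuron; delaying one ear shifts the firing position toward one end of the row.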
[0017]
The localization CONP algorithm unit 11 and the identification CONP algorithm units 12 and 13 all use the competitive learning neural network (hereinafter "CONP") described in Non-Patent Document 9.
[0018]
CONP is a vector quantization network built entirely from PN models. It aims at dimensional compression of multi-dimensional vectors and absorption of pattern variation by means of representative vectors in an auditory information processing system, and applies Kohonen's competitive learning model and Self-Organizing Map (hereinafter "SOM") to a pulse neural network.
[0019]
FIG. 8 shows the operation flow of competitive learning and recognition processing using the conventional SOM. The figure shows the flow when an input vector xi (an n-dimensional data vector whose elements are input pulses) is fed to each neuron of a pulse neural network with M neurons via n channels. When the input vector xi is input (S01), the network calculates the evaluation value 1/|wj − xi| of each neuron (S02), where wj is the reference vector of neuron j (the vector whose elements are its coupling weights). The evaluation value of a neuron increases as the Euclidean distance between its reference vector wj and the input vector xi decreases. Next, the neuron with the largest evaluation value (hereinafter the "winner neuron") is found (S03). In the learning phase, the connection weights are updated so that the reference vector wj of the winner neuron approaches the input vector xi (S04), and the reference vectors of the neurons in the vicinity of the winner neuron are updated similarly (S05). Then, the label j of the neuron with the largest evaluation value is output (S06). If learning has already been completed and recognition is actually being performed, that is, outside the learning phase, the connection weights are not updated. Finally, the coefficients used to update the connection weights (i.e., the reference vectors) are updated, and steps S01 to S06 are performed on the next input vector (S07).
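Steps S01 to S06 can be sketched as follows for a one-dimensional neuron arrangement. The learning coefficient, the Gaussian neighbourhood function, and the data layout are assumptions for illustration; only the winner selection by smallest Euclidean distance and the winner/neighbour updates come from the flow above.

```python
import math

def som_step(neurons, x, alpha=0.2, sigma=1.0, learning=True):
    """One pass of the conventional SOM flow (S01-S06)."""
    dists = [math.dist(w, x) for w in neurons]              # |w_j - x| (S02)
    win = min(range(len(neurons)), key=lambda j: dists[j])  # winner neuron (S03)
    if learning:
        for j, w in enumerate(neurons):
            # S04/S05: pull the winner and its neighbours toward the input;
            # the Gaussian neighbourhood h is an assumed choice
            h = math.exp(-((j - win) ** 2) / (2 * sigma ** 2))
            neurons[j] = [wi + alpha * h * (xi - wi) for wi, xi in zip(w, x)]
    return win                                              # output label j (S06)
```

Calling `som_step` with `learning=False` reproduces the pure recognition pass in which the weights are left untouched.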
[0020]
In the SOM algorithm, the neuron whose reference vector is closest to the input vector becomes the winner neuron, and not only the reference vector of the winner neuron but also the reference vectors of the neurons around it are moved closer to the input vector. As a result, SOM can perform vector quantization while preserving the topological relationships of the input vector group.
[0021]
The SOM algorithm will be described using FIGS. 9-1 to 9-3. In these figures, the light gray region represents the input space and the numbered circles represent reference vectors. As shown in FIG. 9-1, the purpose of the SOM algorithm is to quantize the input space by representing the center of each subspace with a reference vector. As shown in FIG. 9-2, because the reference vectors of not only the winner neuron but also the surrounding neurons are updated during learning, the reference vectors move into their subspaces while maintaining their similarity (topological) relationships, and after learning the reference vectors are arranged in the order of similarity of the input space. In conventional competitive learning, as shown in the left diagram of FIG. 9-3, vector quantization is performed without regard to the similarity of the input vectors, whereas in the SOM algorithm, as shown in the right diagram of FIG. 9-3, vector quantization preserves the similarity relationships of the input vectors: input vectors close to each other are represented by reference vectors close to each other, and input vectors far from each other are represented by reference vectors far from each other.
[0022]
In CONP, learning is performed by the SOM algorithm. However, in CONP the closeness of an input vector to a reference vector is evaluated not by the Euclidean distance but by the inner product EV = |w||x|cos θ (w: reference vector, x: input vector, θ: the angle between the two vectors), and the neuron with the highest evaluation value becomes the winner neuron. Since the internal potential is the sum of the local membrane potentials, and the magnitude of each local membrane potential is proportional to both the coupling weight and the frequency of the input pulses, evaluation by the inner product of the input vector and the reference vector is equivalent to evaluation by the internal potential.
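Numerically, EV = |w||x|cos θ is just the ordinary inner product w · x, which is what the internal potential approximates when pulse frequencies stand in for the vector elements. A minimal sketch of the evaluation and winner selection (the two-dimensional example vectors are illustrative):

```python
def ev(w, x):
    """CONP evaluation value: the inner product w . x = |w||x|cos(theta).
    With reference vectors normalised to norm 1 (as CONP does, see below),
    the highest EV corresponds to the smallest angle between w and x."""
    return sum(wi * xi for wi, xi in zip(w, x))

def winner(references, x):
    """The winner neuron is the one whose reference vector gives the highest EV."""
    return max(range(len(references)), key=lambda j: ev(references[j], x))
```

For unit reference vectors along the two axes, an input leaning toward one axis makes that axis's neuron the winner.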
[0023]
In addition, CONP is configured so that only the neuron with the highest evaluation value fires. Specifically, as shown in FIG. 10, two state detection neurons are provided: a non-firing detection neuron (hereinafter "NFD neuron") that fires when none of the competitive learning neurons is firing, and a multiple-firing detection neuron (hereinafter "MFD neuron") that fires when two or more competitive learning neurons are firing. The threshold of the competitive learning neurons is adjusted uniformly according to the firing state of these state detection neurons, thereby maintaining the situation in which exactly one competitive learning neuron fires. Where it is necessary to distinguish them from the NFD and MFD neurons, the neurons that perform competitive learning are referred to as competitive learning neurons.
[0024]
FIGS. 11-1 and 11-2 show the operation flow of CONP. In CONP, an input vector x(t) = (x1(t), x2(t), ..., xi(t), ..., xn(t)) consisting of n data pulses is input per unit time (S101), where t is time. CONP then calculates the output value ynfd(t) of the NFD neuron and the output value ymfd(t) of the MFD neuron (S102, S103). Next, the internal potentials Ij(t) (j = 1, ..., M) of the M competitive learning neurons of CONP are calculated (S104); neurons whose internal potential Ij(t) exceeds the threshold TH output y(t) = 1 and the other neurons output y(t) = 0 (S105). Then, the connection weights are updated for each neuron outputting "1" (S106) and for the neurons in its vicinity (S107), and the reference vectors are normalized to norm 1 (S108). Outside the learning phase, the connection weights are not updated. Finally, the coefficients used to update the connection weights are updated, and steps S101 to S108 are performed on the next input vector (S109).
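The interaction of the NFD/MFD signals with the shared threshold can be sketched as follows. The simple additive threshold adjustment and the loop bound are assumptions standing in for the membrane-potential mechanism of the state detection neurons; the winner-take-all outcome is what the flow above describes.

```python
def conp_step(weights, x, th, wfd=0.1):
    """Sketch of one CONP recognition step (S101-S105): compute each
    competitive-learning neuron's internal potential as the inner product of
    its reference vector with the input pulses, then use the non-firing (NFD)
    and multiple-firing (MFD) conditions to nudge a shared threshold until
    exactly one neuron fires."""
    for _ in range(1000):                       # bounded search for stability
        potentials = [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]
        y = [1 if i_t >= th else 0 for i_t in potentials]   # S104-S105
        n_fired = sum(y)
        if n_fired == 0:
            th -= wfd          # NFD condition: no firing, so lower the threshold
        elif n_fired > 1:
            th += wfd          # MFD condition: multiple firing, raise the threshold
        else:
            return y, th       # exactly one winner remains
    return y, th
```

Whether the threshold starts too high or too low, the adjustment converges on a single firing neuron, the one whose reference vector best matches the input.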
[0025]
The calculation method in CONP will now be described. First, to clarify the operation of the PN model in CONP, the following (Equation 1) and (Equation 2) are defined.
[0026]
The system is a discrete time system with sampling frequency Fs, and Δt = 1/Fs. A function F with four arguments (the time t, the decay time constant τ, the coupling weight w, and the input signal x(t) at time t) is introduced and defined as the following (Equation 1).
[0027]
Then, the internal potential I(t) of the PN model at time t can be described, as the sum of the local membrane potentials pi(t), by the following (Equation 2).
[0028]
Here, τ is the decay time constant of pi(t).
Let RP be the refractory period of the PN model, let ET(t) be the elapsed time since the previous firing at time t, and assume ET(0) > RP. The output value y(t) of the PN model is then calculated by the following algorithm.
[0029]
if I(t) ≥ TH and ET(t) > RP then y(t) = 1, ET(t) = 0 else y(t) = 0, ET(t) = ET(t−Δt) + Δt
The parameters τ, w1, w2, ..., wn, and TH take values specific to each PN model, and the combination of these values determines the operation of each PN model.
[0030]
In CONP, the internal potential I(t) of each neuron is used as the evaluation value of the similarity between the input vector and that neuron's reference vector.
As described above, evaluation by the inner product of the input vector and the reference vector is equivalent to evaluation by the internal potential, and, as also described above, the state detection neurons ensure that only the neuron with the highest evaluation value fires. Because the competitive learning neuron that fires in the network is the winner neuron, learning takes place in each neuron when it fires. To represent the input pattern to be learned, the local membrane potentials pcwi(t) of synapses whose connection weights are fixed to 1 are used. FIG. 5 shows a competitive learning pulse neuron model in which the elements necessary for learning are added to the basic PN model. Let the number of input pulse trains be n and let this neuron be the h-th of M competitive learning neurons. If the outputs of the NFD neuron and the MFD neuron at time t are ynfd(t) and ymfd(t), and the connection weights from the NFD neuron and the MFD neuron to the competitive learning neuron are wfd and −wfd respectively (where wfd > 0), the internal potential Ih(t) of the h-th competitive learning neuron at time t can be described by the following (Equation 3) using the function F introduced above.
[0031]
In CONP, pnfd and pmfd are treated as dynamic changes in the firing threshold for control purposes, so the decay time constant τfd is assumed to be sufficiently larger than the time constant τ. The update of the connection weight wwin,i(u) of the winner neuron at time u can be expressed by the following (Equation 4), where α is the learning coefficient.
[0032]
After each update, the weight vector w(u) = (wwin,1(u), ..., wwin,n(u)) is normalized so that its norm is 1.
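A sketch of this update-and-normalise step, under stated assumptions: the exact update rule of (Equation 4) is not reproduced in this text, so the standard competitive form w ← w + α(p − w) is used here purely as a stand-in; the norm-1 normalisation follows the text directly.

```python
import math

def update_winner(w, p_cw, alpha=0.1):
    """Hedged sketch of the winner-neuron weight update and norm-1
    normalisation.  `p_cw` stands for the fixed-weight local membrane
    potentials pcwi(t) that represent the input pattern; the update form
    w <- w + alpha*(p_cw - w) is an assumption."""
    w = [wi + alpha * (pi - wi) for wi, pi in zip(w, p_cw)]
    norm = math.sqrt(sum(wi * wi for wi in w)) or 1.0
    return [wi / norm for wi in w]   # normalise the reference vector to norm 1
```

The normalisation keeps the inner-product evaluation comparable across neurons, which is why CONP renormalises after every update.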
[0033]
When the total internal potential generated by the input pulse trains fluctuates significantly, the threshold changes to absorb the fluctuation, and this threshold change may fail to follow changes in the direction of the input vector.
CONP therefore suppresses changes of the internal potential caused by fluctuations in the norm of the input signal by subtracting, in advance, a fixed fraction βpcw (where 0 ≤ βpcw ≤ 1) of the total of the pcwi from the internal potential I(t). As a result, Ih(t) in (Equation 3) above is corrected as in the following (Equation 5).
[0034]
With the above algorithm, Kohonen's competitive learning can be realized in a pulse neural network, and even if the spectrum pattern contained in the input signal changes from moment to moment, it is learned statistically and vector quantization is performed. The SOM algorithm can then be realized easily by applying the connection weight update described so far to the neurons in the vicinity of the winner neuron as well.
[0035]
At the time of learning, the localization CONP algorithm unit 11 vector-quantizes, in order of similarity, the unique patterns extracted by the time difference feature extraction unit 10. That is, it creates a group of reference vectors that can represent the unique patterns while maintaining their similarity relationships. As a result, the localization CONP algorithm unit 11 realizes a mapping onto reference vectors that preserves the similarity relationships of the unique patterns.
[0036]
Then, as shown in FIG. 3, at the time of recognition the localization CONP algorithm unit 11 maps each unique pattern onto a reference vector and outputs, as the localization result, the firing signal of the neuron holding the reference vector that represents that pattern.
[0037]
The identification CONP algorithm unit 12 detects the frequency patterns present in the input signal supplied from the input signal processing unit 7 and vector-quantizes them.
[0038]
At the time of learning, the identification CONP algorithm unit 13 performs supervised learning by LVQ (Learning Vector Quantization).
In LVQ supervised learning, learning by the SOM algorithm is performed in the first phase, and the result is labeled in the second phase.
The sound type information of the input signal is used as the teacher signal. Through this learning, the identification CONP algorithm unit 13 can vector-quantize, for each sound source type, the patterns already vector-quantized by the identification CONP algorithm unit 12, and can label them with the sound source type.
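The second (labelling) phase can be sketched as follows, assuming a simple majority vote of teacher labels over the inputs each neuron wins; the text does not detail the labelling procedure, so this is only an illustration of the idea.

```python
import math
from collections import Counter

def label_neurons(neurons, samples):
    """Assign each competitive neuron the teacher label (sound type) it most
    often wins; `samples` is a list of (input vector, teacher label) pairs."""
    votes = [Counter() for _ in neurons]
    for x, teacher in samples:
        # winner by smallest Euclidean distance (first-phase SOM result assumed)
        win = min(range(len(neurons)), key=lambda j: math.dist(neurons[j], x))
        votes[win][teacher] += 1
    return [v.most_common(1)[0][0] if v else None for v in votes]
```

After labelling, recognition reduces to finding the winner neuron and reading off its stored sound-type label.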
[0039]
Then, at the time of recognition, the identification CONP algorithm unit 13 further vector-quantizes, for each sound source type, the (already vector-quantized) patterns detected by the identification CONP algorithm unit 12, and outputs the corresponding label as the identification result.
[0040]
For example, the sound of an ambulance contains two frequency patterns, one sounding like "pee" and the other like "pow". It is the identification CONP algorithm unit 12 that vector-quantizes these two frequency patterns separately, and it is the identification CONP algorithm unit 13 that combines the two vector-quantized patterns into a single quantization and outputs a label indicating "ambulance".
[0041]
By configuring as described above, it is possible to realize the sound source localization /
identification device 1 that can substantially simultaneously localize and identify the sound
source when the sound is input through the microphones 2 and 3.
The sound source localization / identification device 1 displays the localization result and the
identification result on the display unit 5.
[0042]
The sound source localization / identification device 1 may also be provided with a communication unit, and the localization result and the identification result may be transmitted through the communication unit to, for example, a notification device carried by the user, which then notifies the user of the localization and identification results.
[0043]
Implementation examples of the sound source localization unit 8 and the sound source
identification unit 9 on an FPGA are shown in (1) to (7) below.
[0044]
(1) The mounted device is an FPGA device Stratix II EP2S60 manufactured by Altera, and the
minimum operating frequency of the mounted circuit is 64 kHz.
[0045]
(2) The external interface is USB 2.0, with 16 input bits and 13 output bits.
[0046]
(3) The number of input frequency channels is 15 for sound source localization and 43 for sound source identification.
[0047]
(4) The time-difference feature quantity extraction unit 10 has 15 frequency channels and 21
neurons per channel.
[0048]
(5) Localization CONP algorithm unit 11 (hereinafter also referred to as the "sound source localization extraction result mapping unit"): the number of input channels of each competitive learning neuron is 317 (including the two inputs from the state detection neurons), and the number of competitive learning neurons is 7.
[0049]
(6) Identification CONP algorithm unit 12 (hereinafter also referred to as the "sound source identification frequency pattern detection unit"): the number of input channels of each competitive learning neuron is 45 (including the two inputs from the state detection neurons), and the number of competitive learning neurons is 10.
[0050]
(7) Identification CONP algorithm unit 13 (hereinafter also referred to as the "sound source identification detection pattern mapping unit"): the number of input channels of each competitive learning neuron is 12 (including the two inputs from the state detection neurons), and the number of competitive learning neurons is 6.
[0051]
In this configuration, the required number of circuits (unit: ALUT) is as shown in Table 1.
[0052]
Note that the external interface unit is the portion that serves as the interface for inputting signals from the input signal processing units 6 and 7 to the time difference feature extraction unit 10 and the identification CONP algorithm unit 12.
[0053]
As shown in Table 1, the total number of circuits required in this implementation example is 35,144 ALUTs, while the EP2S60 device provides 48,352 ALUTs. The entire circuit of the sound source localization unit 8 and the sound source identification unit 9 can therefore be mounted on a single FPGA.
[0054]
In the sound source localization / identification device 1, the sound source localization unit 8 and the sound source identification unit 9 are both configured as neural networks comprising a plurality of pulse neuron models, and each pulse neuron model is configured as a digital circuit.
That is, the sound source localization unit 8 and the sound source identification unit 9 can be realized by mounting a large number of common elements, namely pulse neuron models, on a device such as an FPGA.
For this reason, it is easy to implement the two functions of sound source localization and sound source identification in one device.
Moreover, since the operations in the pulse neuron models are executed in parallel on the digital circuit, a practical operation speed can be realized.
[0055]
FIG. 1 is a perspective view of a sound source localization / identification device according to an embodiment of the present invention.
FIG. 2 is a block diagram of the device at the time of learning, and FIG. 3 is a block diagram of the device at the time of recognition.
FIG. 4 is a schematic diagram of the PN model. FIG. 5 is a block diagram showing an implementation example of the PN model on an FPGA. FIG. 6 is a block diagram showing the structure of an input signal processing unit. FIG. 7 is a schematic diagram of the extraction model of the time difference feature extraction unit. FIG. 8 is a flowchart showing the conventional SOM algorithm. FIGS. 9-1, 9-2, and 9-3 are diagrams for explaining the SOM algorithm. FIG. 10 is a schematic diagram of CONP. FIGS. 11-1 and 11-2 are flowcharts showing the flow of processing in CONP.
Explanation of reference signs
[0056]
1: sound source localization / identification device; 8: sound source localization unit; 9: sound source identification unit