Is it a good idea to use CNN to classify 1D signal?


16 votes

I am working on sleep stage classification. I have read some research articles on this topic, and many of them used SVMs or ensemble methods. Is it a good idea to use a convolutional neural network to classify a one-dimensional EEG signal?

I am new to this kind of work. Pardon me if I have asked anything wrong.

neural-networks svm conv-neural-network signal-processing






asked Apr 17 at 6:00









Fazla Rabbi Mashrur











• A 1D signal can be transformed into a 2D signal by breaking the signal up into frames and taking the FFT of each frame. For audio this is quite common. – MSalters, Apr 18 at 11:21
4 Answers


















22 votes












I guess that by 1D signal you mean time-series data, where you assume temporal dependence between the values. In such cases convolutional neural networks (CNNs) are one of the possible approaches. The most popular neural-network approach to such data is to use recurrent neural networks (RNNs), but you can alternatively use CNNs, or a hybrid approach (quasi-recurrent neural networks, QRNN), as discussed by Bradbury et al. (2016) and illustrated in their figure below. There are also other approaches, such as using attention alone, as in the Transformer network described by Vaswani et al. (2017), where the information about time is passed via Fourier-series-style positional features.



[Figure: comparison of LSTM, CNN, and QRNN architectures, from Bradbury et al. (2016)]



With an RNN, you use a cell that takes as input the previous hidden state and the current input value and returns an output and another hidden state, so the information flows via the hidden states. With a CNN, you use sliding windows of some width that look for certain (learned) patterns in the data, and stack such windows on top of each other, so that higher-level windows look for patterns within the lower-level patterns. Using such sliding windows may be helpful for finding things such as repeating patterns in the data (e.g. seasonal patterns). QRNN layers mix both approaches. In fact, one of the advantages of CNN and QRNN architectures is that they are faster than RNNs.
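
To make the sliding-window picture concrete, here is a minimal Keras sketch of a 1D CNN classifier over fixed-length windows (the 3000-sample window, filter sizes, and five output classes are illustrative assumptions, not part of the answer above):

    import tensorflow as tf
    from tensorflow.keras import layers

    # Hypothetical input: 30 s windows of single-channel EEG at 100 Hz -> 3000 samples
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(3000, 1)),
        layers.Conv1D(32, kernel_size=50, strides=6, activation="relu"),  # sliding windows over time
        layers.MaxPooling1D(8),
        layers.Conv1D(64, kernel_size=8, activation="relu"),  # patterns of lower-level patterns
        layers.GlobalAveragePooling1D(),
        layers.Dense(5, activation="softmax"),  # e.g. five sleep stages
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
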






answered Apr 17 at 6:54 by Tim (edited Apr 18 at 12:33)

10 votes












You can certainly use a CNN to classify a 1D signal. Since you are interested in sleep stage classification, see this paper. It's a deep neural network called DeepSleepNet, which uses a combination of 1D convolutional and LSTM layers to classify EEG signals into sleep stages.



    Here is the architecture:



[Figure: DeepSleepNet architecture]



    There are two parts to the network:




• Representational learning layers: this part consists of two convolutional networks in parallel. The main difference between the two networks is the kernel size and the max-pooling window size. The left one uses kernel size $F_s/2$ (where $F_s$ is the sampling rate of the signal), whereas the one on the right uses kernel size $F_s \times 4$. The intuition is that one network tries to learn "fine" (high-frequency) features while the other tries to learn "coarse" (low-frequency) features. (A rough code sketch of these two branches follows this list.)

• Sequential learning layers: the embeddings (learnt features) from the convolutional layers are concatenated and fed into the LSTM layers to learn temporal dependencies between the embeddings.

At the end there is a 5-way softmax layer that classifies the time series into one of five classes corresponding to sleep stages.
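
To illustrate the two-branch idea, here is a rough Keras sketch of the representational learning layers feeding an LSTM (a loose reading of the description above; the sampling rate, strides, and layer widths are placeholders, not the paper's exact configuration):

    import tensorflow as tf
    from tensorflow.keras import layers

    fs = 100  # hypothetical sampling rate (Hz)
    inp = tf.keras.Input(shape=(30 * fs, 1))  # one 30 s EEG epoch

    # "Fine" branch: small kernel (Fs / 2) for high-frequency features
    fine = layers.Conv1D(64, kernel_size=fs // 2, strides=fs // 16,
                         activation="relu")(inp)
    fine = layers.MaxPooling1D(8)(fine)

    # "Coarse" branch: large kernel (Fs * 4) for low-frequency features
    coarse = layers.Conv1D(64, kernel_size=fs * 4, strides=fs // 2,
                           activation="relu")(inp)
    coarse = layers.MaxPooling1D(4)(coarse)

    # Concatenate the branch embeddings, then model temporal structure with an LSTM
    merged = layers.Concatenate(axis=1)([fine, coarse])
    seq = layers.LSTM(128)(merged)
    out = layers.Dense(5, activation="softmax")(seq)  # 5-way softmax over sleep stages

    model = tf.keras.Model(inp, out)
    model.summary()
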






answered Apr 17 at 14:06 by kedarps (edited Apr 17 at 14:18)

3 votes












FWIW, I'd recommend checking out the Temporal Convolutional Network (TCN) from this paper (I am not the author). It is a neat way of using a CNN for time-series data: it is sensitive to time order and can model arbitrarily long sequences (but it doesn't have a memory).



[Figure: Temporal Convolutional Network architecture]
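
For a feel of the building block TCNs rest on, here is a minimal Keras sketch of a stack of dilated causal 1D convolutions (the depth and filter counts are illustrative assumptions, not the paper's configuration):

    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([tf.keras.Input(shape=(None, 1))])  # arbitrary-length sequences
    for d in (1, 2, 4, 8):
        # Causal padding keeps each output dependent only on past inputs;
        # exponentially growing dilation widens the receptive field.
        model.add(layers.Conv1D(32, kernel_size=3, dilation_rate=d,
                                padding="causal", activation="relu"))
    model.add(layers.GlobalAveragePooling1D())
    model.add(layers.Dense(5, activation="softmax"))
    model.summary()
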






answered Apr 17 at 20:51 by kampta

2 votes












I want to emphasize the use of a stacked hybrid approach (CNN + RNN) for processing long sequences:



• As you may know, 1D CNNs are not sensitive to the order of timesteps (beyond a local scale); of course, by stacking many convolution and pooling layers on top of each other, the final layers are able to observe longer sub-sequences of the original input. However, that might not be an effective way to model long-term dependencies. That said, CNNs are very fast compared to RNNs.


• On the other hand, RNNs are sensitive to the order of timesteps and can therefore model temporal dependencies very well. However, they are known to be weak at modeling very long-term dependencies, where a timestep may depend on timesteps very far back in the input. Furthermore, they are very slow when the number of timesteps is large.


So an effective approach might be to combine CNNs and RNNs in this way: first, use convolution and pooling layers to reduce the dimensionality of the input. This gives a rather compressed representation of the original input with higher-level features. Then feed this shorter 1D sequence to the RNNs for further processing. This way we take advantage of the speed of CNNs as well as the representational capabilities of RNNs at the same time. As with any other method, you should experiment with this on your specific use case and dataset to find out whether or not it is effective. (A minimal code sketch follows the diagram below.)



        Here is a rough illustration of this method:



    --------------------------
    -                        -
    -    long 1D sequence    -
    -                        -
    --------------------------
                |
                |
                v
    ==========================
    =                        =
    = Conv + Pooling layers  =
    =                        =
    ==========================
                |
                |
                v
    ---------------------------
    -                         -
    - Shorter representations -
    - (higher-level           -
    -  CNN features)          -
    -                         -
    ---------------------------
                |
                |
                v
    ===========================
    =                         =
    =  (stack of) RNN layers  =
    =                         =
    ===========================
                |
                |
                v
    ===============================
    =                             =
    = classifier, regressor, etc. =
    =                             =
    ===============================
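
And here is a minimal Keras sketch of the same stack, under assumed sizes (the sequence length, filter counts, and GRU width are illustrative):

    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10000, 1)),           # long 1D sequence
        layers.Conv1D(32, 7, activation="relu"),    # conv + pooling layers ...
        layers.MaxPooling1D(5),
        layers.Conv1D(64, 7, activation="relu"),    # ... compress to shorter,
        layers.MaxPooling1D(5),                     #     higher-level features
        layers.GRU(64),                             # RNN over the shortened sequence
        layers.Dense(5, activation="softmax"),      # classifier head
    ])
    model.summary()
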





answered Apr 17 at 19:53 by today