The Open Materials Science Journal

ISSN: 1874-088X

Design of a Ratio-Memory Cellular Neural Network (RMCNN) CMOS Circuit for Associative-Memory Applications in 0.25 µm Silicon Technology



Jui-Lin Lai1, *, Chung-Yu Wu2
1 Department of Electronic Engineering, National United University, MiaoLi City 36003, Taiwan
2 Department of Electronic Engineering, National Chiao-Tung University, Hsinchu 300, Taiwan

Abstract

This paper proposes a Ratio-Memory Cellular Neural Network (RMCNN) structured with self-feedback and a modified Hebbian learning algorithm. The learnable RMCNN architecture was designed and realized in CMOS technology for associative-memory neural network applications. The system can learn exemplar patterns and correctly recognize the output patterns. For all test input exemplar patterns, only the self-output pixel value is used in the A template, and the B-template weights are updated from the nearest five neighboring elements. The learned B-template weights are converted into ratio weights by normalizing each captured weight by the sum of the absolute coefficients, which enhances the features of the recognized pattern. Simulation results show that the system can learn exemplar patterns corrupted by noise and recognize the correct pattern. The 9×9 RMCNN structure with self-feedback and the modified Hebbian learning algorithm is implemented and verified as a CMOS circuit in TSMC 0.25 µm 1P5M VLSI technology. The proposed RMCNN provides greater learning and recognition capability for variant exemplar patterns in auto-associative memory neural system applications.

Keywords: Auto-Associative Memory, Cellular Neural Network (CNN), Ratio-Memory (RM), Template.


Article Information


Identifiers and Pagination:

Year: 2016
Volume: 10
Issue: Suppl-1, M6
First Page: 54
Last Page: 69
Publisher Id: TOMSJ-10-54
DOI: 10.2174/1874088X01610010054

Article History:

Received Date: 17/06/2015
Revision Received Date: 20/07/2015
Acceptance Date: 20/08/2015
Electronic publication date: 15/07/2016
Collection year: 2016

© Lai and Wu; Licensee Bentham Open.

open-access license: This is an open access article licensed under the terms of the Creative Commons Attribution-Non-Commercial 4.0 International Public License (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/legalcode), which permits unrestricted, non-commercial use, distribution and reproduction in any medium, provided the work is properly cited.


* Address correspondence to this author at the Department of Electronic Engineering, National United University, MiaoLi City 36003, Taiwan; Tel: +886-37-382510; Fax: +886-37-382498; E-mail: jllai@nuu.edu.tw





INTRODUCTION

The Cellular Neural Network (CNN), introduced by Chua and Yang [1, 2], has the property that each cell is locally connected only to its neighboring cells. Among the many learning methods published in the literature, the template coefficients of a CNN can be found by a perceptron-like learning rule. The network can be easily implemented in VLSI for various image operations; thus, CNNs with specific templates have been applied to image processing [3-7]. Such a network can learn exemplar patterns and correctly recognize the output pattern.

Most neural networks designed for associative memory store the processed patterns as local minima of an associative energy function. Associative memory is an important application of neural networks. Learning algorithms based on the network dynamic equations have been integrated into CNNs to generate associative memories for image pattern learning and recognition [6-10].

The modified Hopfield network and discrete Hebbian learning have been applied to the neurons of a CNN. The Grossberg outstar structure constitutes the ratio memory (RM) used to implement the template weights of the neural network for various image-processing tasks. For each exemplar input pattern, the weights of the A template are accumulated from the selected cell and its four nearest neighboring cells, and the processed A template is produced by the ratio memory. With this associative-memory structure, a 9×9 (18×18) array can learn and then recognize 3 (5) patterns corrupted by Gaussian noise. An 18×18 RMCNN that includes a coupled A template and self-feedback from the output is able to learn and recognize 87 noisy patterns with a noise variance of 0.3 [15-19].

In this paper, the capability and function of an 18×18 RMCNN architecture with a coupled, embedded ratio-memory B template and the modified Hebbian learning algorithm are presented and analyzed. The RMCNN structure with the B template and the modified Hebbian learning algorithm is fabricated as a VLSI circuit in TSMC 0.25 µm 1P5M CMOS technology. The system is able to learn eight patterns corrupted by white-black noise and recognize them successfully, improving the pattern learning and recognition capability of the proposed system.

Reconfigurable CNNs have been developed to meet various application requirements [20-23]. New synaptic weighting circuits have also been designed for pattern recognition, medical detection, and other special applications [24-29].

The paper is organized as follows. The first part presents the RMCNN structure with the modified Hebbian learning algorithm and the embedded ratio memory. The second part describes the VLSI structure and its CMOS circuit realization. The third part analyzes the RMCNN and demonstrates simulation results for pattern learning and recognition. Finally, conclusions are given.

RMCNN ARCHITECTURE

A Cellular Neural Network performs parallel processing and can be expanded to a massive scale to suit neuromorphic applications. In a two-dimensional CNN, each cell is connected only to its neighboring cells, with the weights expressed on the connecting line segments. X_ij(t) is the state and Y_ij(t) is the output of a regular cell, and the mathematical model can be expressed by equations (1) and (2) [1, 2].

(1) \( C\,\frac{dX_{ij}(t)}{dt} = -\frac{1}{R}X_{ij}(t) + \sum_{C(k,l)\in N_r(i,j)} a_{ijkl}\,Y_{kl}(t) + \sum_{C(k,l)\in N_r(i,j)} b_{ijkl}\,U_{kl} + z_{ij} \)

(2) \( Y_{ij}(t) = f\bigl(X_{ij}(t)\bigr) = \tfrac{1}{2}\bigl(\lvert X_{ij}(t)+1\rvert - \lvert X_{ij}(t)-1\rvert\bigr) \)

where Y_kl(t) is the output and U_kl is the input of cell C(k, l) in the r-neighborhood N_r(i, j), z_ij is the threshold value of cell C(i, j), f is the bipolar activation function, and a_ijkl and b_ijkl are the weights applied to Y_kl(t) and U_kl for cell C(i, j), respectively.
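For a behavioral view of equations (1) and (2), the following Python sketch integrates the standard CNN state and output equations with a simple forward-Euler step; the function name cnn_step, the parameter values, and the zero-padded boundary handling are illustrative assumptions rather than details taken from this paper.

```python
import numpy as np

def f(x):
    # Bipolar piecewise-linear activation: y = 0.5 * (|x + 1| - |x - 1|)
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def cnn_step(x, u, A, B, z, dt=0.01, R=1.0, C=1.0):
    """One forward-Euler step of the CNN state equation for an N x N cell array.

    x, u : N x N state and input arrays
    A, B : 3 x 3 feedback and control templates (r = 1 neighborhood)
    z    : threshold (scalar or N x N array)
    """
    n_rows, n_cols = x.shape
    y = f(x)
    yp = np.pad(y, 1)          # fixed (Dirichlet) boundary: edge cells see zeros
    up = np.pad(u, 1)
    feedback = np.zeros_like(x)
    control = np.zeros_like(x)
    for di in range(3):
        for dj in range(3):
            feedback += A[di, dj] * yp[di:di + n_rows, dj:dj + n_cols]
            control += B[di, dj] * up[di:di + n_rows, dj:dj + n_cols]
    dx = (-x / R + feedback + control + z) / C
    return x + dt * dx

# Example: uncoupled A (self-feedback only) and a 5-point B template.
A = np.zeros((3, 3)); A[1, 1] = 2.0
B = np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]], dtype=float)
x = np.zeros((9, 9)); u = np.sign(np.random.randn(9, 9))
for _ in range(200):
    x = cnn_step(x, u, A, B, z=0.0)
```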

In this paper, the proposed CNN structure consists of an uncoupled A template and a coupled B template with an r = 1 neighborhood. The coefficients of the space-variant B template correspond to the input of cell C(i, j) and its four nearest neighboring inputs. A fixed (Dirichlet) boundary condition is used for the outermost boundary cells, whose values are preset to zero. The A template has only a single self-feedback coefficient. The A and B templates are expressed as

(3)

A CNN that includes ratio memory (RM) is shown in Fig. (1a), where the RM and SRM blocks realize the weights connecting the neighboring cells and the self-cell, respectively. The learned weights are transformed into the ratio weights w_bijkl and stored as the associative memory in the ratio-memory (RM) model at an equilibrium state as

(4)

The equilibrium state X is globally asymptotically stable, based on the local stability given by the energy function and the global attractivity of the CNN behavioral model. The RMCNN is realized to store a large set of exemplar patterns for recurrent auto-associative memory [11-17]. The RMCNN is trained with the input test exemplar patterns to update the weights of the B template and to output the desired pattern. The obtained output pattern depends on the information shared between the input pattern and the weight matrix.

Fig. (1)

(a) CNN with RM structure; (b) realization of cell C(i, j) [4].



In the VLSI implementation, the CNN variables u_ij, x_ij, and y_ij are represented as voltages, and z_ij is a bias current. The block diagram of a cell realized by electronic circuits is shown in Fig. (1b). The signal-flow structure of a cell C(i, j) can be expanded to the standard CNN {A, B, Z} with a single-cell neighborhood. The arrows indicate the parallel data paths from the inputs u_kl and the outputs y_kl of the neighboring cells C(k, l), and the thin lines on the arrows denote the threshold z_ij, input u_ij, state x_ij, and output y_ij, respectively.

In the learning period, a weight-learning procedure incorporating the modified Hebbian learning algorithm is applied in the learning block to find the increment dw_ij of the weight vector, as shown in Fig. (2). The recursive update operation for m distinct patterns at time t = 0 is generated as

Fig. (2)

Illustration of the weight-learning algorithm.



(5)

(6)

The weights of the B template are updated in parallel, and the operation ends once all patterns have been learned during the learning period. The weights W_bijkl(0) can be normalized to bound the weight values, and the normalized weights b_rijkl(0) are stored in the ratio memory.
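The learning-period behavior described above can be sketched in Python as a Hebbian-style accumulation of products between each desired pixel and its five nearby inputs, followed by normalization; because equations (5), (6), and (8) are not reproduced in this extraction, the exact update rule, the function names, and the neighbor ordering below are assumptions made only for illustration.

```python
import numpy as np

NEIGHBOR_OFFSETS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # self + 4 neighbors

def learn_b_weights(patterns):
    """Accumulate Hebbian-style B-template weights for an N x N RMCNN.

    patterns : list of N x N arrays with pixel values +1 (black) / -1 (white),
               used both as input and as desired output (auto-association).
    Returns an array of shape (N, N, 5): one weight per neighbor per cell.
    """
    n = patterns[0].shape[0]
    w = np.zeros((n, n, len(NEIGHBOR_OFFSETS)))
    for p in patterns:
        padded = np.pad(p, 1)
        for k, (di, dj) in enumerate(NEIGHBOR_OFFSETS):
            # Product of the desired pixel y_ij and the neighboring input u_kl.
            w[:, :, k] += p * padded[1 + di:1 + di + n, 1 + dj:1 + dj + n]
    return w

def to_ratio_weights(w, eps=1e-12):
    # Normalize each cell's five weights by the sum of their absolute values.
    denom = np.sum(np.abs(w), axis=2, keepdims=True) + eps
    return w / denom
```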

In the elapsed period, the voltage on the storage capacitor C_zs gradually decreases because of the leakage current I_leakage. Assuming the leakage current is nearly constant, the stored voltage on capacitor C_zs decreases as a function of time as

(7) \( V_{C_{zs}}(t) = V_{C_{zs}}(0) - \frac{I_{leakage}}{C_{zs}}\, t \)

In the recognition period, the system uses the RM to normalize the coefficients of the B template. The normalization acts like spatial averaging: the normalization of the patterns is performed through spatial operations over the local neighborhoods of the cell inputs. The ratio-memory approach transforms the normalized weights b_rijkl(t) into the ratio weights b_ijkl of the B template as [15]

(8) \( b_{ijkl}(t) = \dfrac{b_{rijkl}(t)}{\sum_{C(k,l)\in N_r(i,j)} \lvert b_{rijkl}(t) \rvert} \)

The ratio weight b_ijkl(t) is generated by the M/D block to produce the normalizing effect and enhance the features of the pattern. A weight b_ijkl of cell C(i, j) with larger magnitude is gradually increased, whereas a smaller weight is gradually decreased. The simulation results show that the ratio memory improves the ability of the RMCNN to recognize noisy patterns.
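A minimal numerical sketch of this feature-enhancement effect is given below: assuming an approximately constant leakage that removes the same amount of stored magnitude from every weight per step (an assumption consistent with the elapsed-period description, not an extract from the paper), the smaller weights reach zero first, so their ratio weights fall toward 0 while the largest ratio weight drifts toward 1.

```python
import numpy as np

def elapse(weights, leak_per_step, steps):
    """Model the elapsed-period effect on one cell's stored weight magnitudes.

    A roughly constant leakage removes the same amount of stored charge from
    every weight per step, so small weights reach zero first and the surviving
    large weights dominate the ratio normalization.
    """
    w = np.array(weights, dtype=float)
    history = []
    for _ in range(steps):
        mag = np.maximum(np.abs(w) - leak_per_step, 0.0)
        w = np.sign(w) * mag
        denom = np.sum(np.abs(w)) or 1.0
        history.append(w / denom)          # ratio weights at this instant
    return history

# Example: the largest weight's ratio drifts toward 1, small ones toward 0.
ratios = elapse([0.9, 0.3, -0.2, 0.1, 0.05], leak_per_step=0.02, steps=20)
print(ratios[0], ratios[-1])
```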

In the recognition state, the cell outputs are adjusted gradually by the learned ratio weights to approach one of the features of the training patterns. The energy function of a CNN can be expressed in quadratic form as

(9)

The energy function tends to converge to a local minimum until the outputs no longer change, and the final recognized pattern is found at the minimum of E in the stable state. In (8), the absolute value appears in the denominator because the connection weight b_rijkl can be either positive or negative, and the ratio memory uses the total magnitude of the weight elements to normalize each weight. To establish the stability of the proposed RMCNN, we further consider the energy function

(10)

Suppose y_ij changes; then the change in energy can be expressed as

(11)

The first term ΔE_RM1 of (11) can be proven to be always non-positive, following the proof method used for the original Hopfield energy function E:

if Δyij≥0, then

if Δyij≤0, then

Consider the second energy change term ΔERM2

(12)

Because the connection weight in the original Hopfield model is symmetric, i.e., w_ijkl = w_klij, (12) can be written as

(13)

From equation (13), it can also be proven that ΔE_RM2 is always non-positive. Thus, under the assumption that the connection weight is symmetric, as in the original Hopfield model, a stable state of the RMCNN model also exists.

CMOS CIRCUIT REALIZATION

Function blocks are proposed to implement the operations in equations (1)-(8). A detailed block diagram of two neighboring cells C(i, j) and C(k, l) and their RM block in the RMCNN is shown in Fig. (3). The neuron cell sums the current signals from its neighboring cells and from itself, and its stored neuron signal is also output to those cells, as in Fig. (1a). The neuron cell is constructed from three units: the equivalent R_ij and C_ij storage element, a V-I converter (T1), and a V-I converter with sign detection (T2D). The transmitted signals require the ratioed weight, which is calculated by the RM model. The RM block consists of the M/D and S blocks; the current-mode circuit of the M/D block combines a four-quadrant multiplier and a two-quadrant divider [13-17]. The ratio weight is generated from the distinct products of the neighboring weights in the B template, where the product term for each learned weight follows equation (5) during the learning period.

Fig. (3)

The architecture of two neighboring cells and ratio memory (RM) in the RMCNN.



A detailed block diagram of the storage block (S) is shown in Fig. (4), where Figs. (4a) and (4b) illustrate the learning and recognition operations of the associative memory, respectively. The T2L block stores the absolute voltage value transferred from C_zi to C_zs, and the latch circuit stores its sign. The resistor R_zs in parallel with C_zs introduces the inevitable RC-time-constant leakage. The T3 block provides a V-I converter to convert the voltage stored on C_zs into a current. The capacitor C_zi stores the weight z_iijkl as an absolute value during the learning period.

Fig. (4)

Learning and recognition operation in the S block of RMCNN.



The current product I_ypij · I_upkl directly charges C_zi and generates the weight voltage V_ziijkl(0) at t = 0. At the end of the learning period, the voltage can be written as

(14) \( V_{ziijkl}(0) = \frac{1}{C_{zi}} \sum_{p=1}^{m} \frac{I_{ypij}\, I_{upkl}}{I_b}\, T_P \)

where I_ypij is the current of the pixel at the i-th row and j-th column of the p-th desired pattern, I_upkl is the current of the pattern input to cell C(k, l) in the N_r(i, j) neighborhood, I_b is a constant bias current, V_ziijkl(0) is the voltage of weight W_ijkl stored on C_zi at t = 0 s, and T_P is the learning time for each learned pattern.
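The storage behavior of the S block and this charging relation can be modeled behaviorally as below; the charging expression assumes the form suggested by equation (14) (the current product scaled by I_b and integrated over T_P on C_zi), and every name and numeric value here is illustrative rather than taken from the fabricated circuit.

```python
from dataclasses import dataclass

@dataclass
class StoredWeight:
    """Behavioral model of one S-block weight: magnitude on the capacitor, sign in a latch."""
    magnitude_v: float   # absolute voltage stored on the capacitor
    sign: int            # +1 or -1, held by the dynamic latch

def learn_weight(i_yp_list, i_up_list, i_b, t_p, c_zi):
    """Charge C_zi with the scaled current products over m patterns (assumed form
    of equation (14)), then split the result into magnitude and sign."""
    v = sum(i_yp * i_up / i_b for i_yp, i_up in zip(i_yp_list, i_up_list)) * t_p / c_zi
    return StoredWeight(magnitude_v=abs(v), sign=1 if v >= 0 else -1)

def elapsed_voltage(w, i_leak, c_zs, t):
    # Constant-current droop of the stored magnitude during the elapsed period.
    return max(w.magnitude_v - i_leak * t / c_zs, 0.0)

# Example with the leakage and capacitance quoted in the simulation section (0.8 fA, 2 pF).
w = learn_weight([1e-6, -1e-6, 1e-6], [1e-6, 1e-6, -1e-6], i_b=2e-6, t_p=0.5e-6, c_zi=2e-12)
print(w, elapsed_voltage(w, i_leak=0.8e-15, c_zs=2e-12, t=100.0))
```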

In the M/D block, the four-quadrant multiplier and two-quadrant divider are used to generate the ratioed weight. During the recognition period, the product currents of the five ratioed weights and the current signals of the five neighboring input cells are summed. The summed current is converted into the cell-state voltage X_ij(t) through R_ij and C_ij, and the expression for V_xij(t) is shown in equation (15).

(15)

The sign of V_ziijkl is detected and latched by the dynamic-latch CMOS circuit of the T2L block shown in Fig. (5). The four-quadrant analog multiplier and the two-quadrant divider are integrated to realize the current-mode M/D CMOS circuit shown in Fig. (6). The multiplication input currents I1 and I3 are provided by the PMOS current mirrors M14i/M14 and M14i/M15/M16, respectively, and the divider input current I2 is supplied through the M24i/M24 pair. The operational amplifier (op-amp) and the NMOS device M21 are combined in a closed-loop feedback configuration. From the properties of the operational amplifier, the voltages VE3 and VE4 at the emitter terminals are virtually identical. The PNP bipolar junction transistors (BJTs) Q1, Q2, Q3, and Q4 perform the multiplication and division operations based on the relation between the base-emitter voltage VBE and the emitter current IE as

Fig. (5)

Sign detector.



(16) \( I_E = I_S \exp\!\left(\frac{V_{BE}}{V_T}\right) \)
where IS is the saturation current of the PN junction and VT is the thermal voltage. The load current I4 is provided through the M19/M20 and M25/M26 PMOS current-mirror pairs, and the sink current through the M29/M30 NMOS current-mirror pair forms the output current Iomd.

Because of the virtual-short property of the op-amp inputs, the loop voltage VBE1 + VBE3 equals VBE2 + VBE4. Therefore, the relationship among IE1, IE2, IE3, and IE4 can be derived from equation (16) as

(17) \( I_{E1}\, I_{E3} = I_{E2}\, I_{E4} \), i.e., \( I_{E4} = \dfrac{I_{E1}\, I_{E3}}{I_{E2}} \)

An XNOR gate controls the flow direction of the output current, which is determined by the signs of the input and the weight in the M/D block.
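A behavioral model of the M/D block consistent with the translinear relation (17) and the XNOR sign control is sketched below; the interface and the assertion are assumptions made for illustration only.

```python
def md_block(i1, i3, i2):
    """Behavioral model of the current-mode M/D block: |Iomd| = |I1| * |I3| / I2,
    with the output current direction set by an XNOR of the two input signs."""
    assert i2 > 0, "divider input current is single-polarity (two-quadrant divider)"
    magnitude = abs(i1) * abs(i3) / i2
    same_sign = (i1 >= 0) == (i3 >= 0)     # XNOR of the sign bits
    return magnitude if same_sign else -magnitude

# Example: 4 uA * 3 uA / 20 uA = 0.6 uA, negative because the input signs differ.
print(md_block(4e-6, -3e-6, 20e-6))
```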

Fig. (6)

The CMOS circuit of the M/D block.



The M/D circuit was simulated in HSPICE, and the results correctly verify the multiplier and divider functions, as shown in Figs. (7a) and (7b), respectively. Fig. (8) shows the CMOS circuit of T2: Fig. (8a) shows the absolute-value V-I converter circuit, and the transfer curve of output absolute current versus input voltage is shown in Fig. (8b). The V-I converter consists of the CMOS differential amplifier M1~M7 with source resistance to increase the linear range, and it is also used to realize the T1 and T3 blocks. The absolute-value circuit loads the output current Iovic through M8~M13 so that the absolute-value current Ioabs flows in a single, unified direction.

Fig. (7)

HSPICE simulation results: (a) Multiplication function with I2=20µA, and (b) Division function with I1=6µA.



Fig. (8)

The circuit of T2 and HSPICE simulation results.



SIMULATION RESULTS

The proposed 18×18 RMCNN structure, which includes the coupled B template, only self-feedback in the A template, and the modified Hebbian learning algorithm with the direct neighborhood (r = 1), is simulated as an associative memory in Matlab. In the processed patterns for the RMCNN, a black pixel is set to +1 and a white pixel to -1. The simulation assumes a constant leakage current of 0.8 fA and an associative-memory storage capacitor C_zs of 2 pF. During the learning period, the weights of the B template are processed into ratio weights used to operate each cell in the system. The English character A corrupted with white-black noise is adopted as the test pattern set, as shown in Fig. (9). The simulation results show that the RMCNN with the B template can learn eight noisy patterns with white-black noise, successfully recognize those patterns, and correctly output the desired pattern of the character A. The function of the RMCNN is verified, and the success rate reaches 97%.
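To make the overall learn-then-recognize flow concrete, the following self-contained toy demonstration uses ±1 pixel patterns, five-neighbor ratio weights, and a simple thresholded weighted-sum update in place of the continuous-time RMCNN dynamics; the pattern, the 10% noise level, and all names are illustrative assumptions, not the Matlab setup used by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_and_recognize(clean, noisy, iters=10):
    """Toy demo (not the paper's exact dynamics): learn five-neighbor ratio
    weights from one clean +/-1 pattern, then iteratively clean up a noisy
    copy with a thresholded weighted sum over each cell's neighborhood."""
    n = clean.shape[0]
    offs = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    pad = lambda a: np.pad(a, 1)
    pc = pad(clean)
    # Hebbian outer products between each pixel and its five neighbors.
    w = np.stack([clean * pc[1 + di:1 + di + n, 1 + dj:1 + dj + n]
                  for di, dj in offs], axis=2)
    w = w / (np.sum(np.abs(w), axis=2, keepdims=True) + 1e-12)  # ratio weights
    y = noisy.astype(float).copy()
    for _ in range(iters):
        py = pad(y)
        s = sum(w[:, :, k] * py[1 + di:1 + di + n, 1 + dj:1 + dj + n]
                for k, (di, dj) in enumerate(offs))
        y = np.where(s >= 0, 1.0, -1.0)
    return y

clean = np.ones((9, 9)); clean[2:7, 4] = -1.0          # toy 9x9 exemplar pattern
flips = np.where(rng.random((9, 9)) < 0.1, -1.0, 1.0)  # ~10% of pixels inverted
noisy = clean * flips
recovered = learn_and_recognize(clean, noisy)
print(int((recovered != clean).sum()), "pixels differ from the exemplar")
```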

Fig. (9)

Test patterns with white-black noise.


Elapsed Time Factor

From the simulation results, the waiting time required for correct pattern recognition is related to the elapsed time factor, where the elapsed time is normalized by 50 seconds to obtain the elapsed time factor. Observing the error function between the desired pattern and the output pattern, the error decreases as a function of the elapsed time factor for each exemplar test pattern. The error is reduced to zero when the elapsed time factor exceeds 9, i.e., 450 seconds, for the various test patterns, as shown in Fig. (10).

Fig. (10)

Error functions versus the elapsed time factor.



The exemplar patterns are the normal, expanded, left-rotated, right-rotated, and reversed versions of the character A, as shown in Fig. (11); the desired RMCNN output is the pattern A. When the test pattern of the expanded A character is applied to the RMCNN, it appears as the first pattern shown in Fig. (12), the recognition procedure is presented by the 2nd through 8th patterns, and the last pattern is the final, correctly recognized output. In another test case, the left-rotated A character is applied to the RMCNN for image recognition. Fig. (13) shows the recognition results sampled over 150 iterations, and the correct pattern is recognized as the last one in the figure. Thus, the RMCNN with the A and B templates can recognize complex test exemplar patterns using the learned ratio weights.

Fig. (11)

Test patterns of character A: normal, expanded, left-rotated, right-rotated, and reversed types.



Fig. (12)

Recognition sequence of expanded A character during recognition period.



Fig. (13)

Recognition sequence of left-rotate A character during recognition period.



The pattern-recognition capability of the proposed 18×18 RMCNN with the B template has been verified. The function blocks of the architecture are fully designed as CMOS circuits and integrated in VLSI technology.

The ratio-memory approach provides feature enhancement in image processing. The effect of the ratioed weights is that the larger weights gradually increase toward 1 with time, whereas the smaller weights gradually decrease toward zero. The five weights are learned and ratioed over the learning and elapsed periods, and the variations of the ratioed weights are shown in Fig. (14).

The proposed 18×18 RMCNN structure with the B template is implemented as a CMOS circuit and simulated in HSPICE. The weights of the B template are updated sequentially for five exemplar patterns, each applied with a charging period of 0.5 µs, and the ratio memory is used to store the ratioed weights. An incomplete exemplar pattern of the English character H is shown on the left of Fig. (15). The simulation results show that the RMCNN is able to recognize the correct output pattern for this test input exemplar pattern, as shown on the right of Fig. (15). The system is thus capable of learning exemplar patterns and recognizing the correct pattern for image-processing applications.

Fig. (14)

RMCNN: (a) learning data with the five neighboring cells for six data inputs; (b) the learned weights; (c) the learning and ratio states; (d) the ratioed weights during the elapsed time.



Fig. (15)

The recognized pattern for the 18×18 RMCNN.



The 9×9 RMCNN chip with learning and recognition operations is designed in TSMC 0.25 µm 1P5M technology, and its layout is shown in Fig. (16). The 9×9 RMCNN chip layout includes 81 regular cells, 405 RMs, and 81 current-summation blocks in a die area of 4000 µm × 4200 µm, and it has been implemented and verified. Since the simulation results also successfully verify the 18×18 RMCNN for pattern recognition, four such chips can be combined to construct the 18×18 RMCNN structure for auto-associative memory applications.

Fig. (16)

Layout of the 9×9 RMCNN chip.



CONCLUSION

This paper proposes and analyzes the ratio-memory cellular neural network (RMCNN) with the B template and self-feedback for pattern recognition in associative memory. The five weights in the B template of the RMCNN are generated and updated from the exemplar patterns; the weights are ratioed by the absolute summation over the neighboring cells and stored in the ratio memory. The simulation results verify that the 18×18 RMCNN can learn eight patterns of the character A corrupted by white-black noise and correctly recognize the desired pattern. Learning more than one desired English character reduces the recognition rate. The proposed RMCNN provides greater learning and recognition ability than a CNN without RM, and its complexity is lower than that of the Hopfield neural network. Moreover, the pattern learning and recognition capability of the proposed RMCNN with the modified Hebbian learning algorithm is improved. The RMCNN architecture is successfully implemented as a VLSI structure in TSMC 0.25 µm 1P5M CMOS silicon technology. The learnable 9×9 RMCNN VLSI chip can be expanded into an 18×18 neural network system to develop more complex bio-image processing for real-time applications.

CONFLICT OF INTEREST

The authors confirm that this article content has no conflict of interest.

ACKNOWLEDGEMENTS

The authors would like to thank the Ministry of Science and Technology of Taiwan, ROC, for supporting this work. The National Chip Implementation Center (CIC) also provided support and assistance with the CMOS technology.

DISCLOSURE

Part of this article has been previously published in the 2006 IEEE Tenth International Symposium on Consumer Electronics (ISCE '06), pp. 1-6 (DOI: 10.1109/ISCE.2006.1689496).

REFERENCES

[1] Chua LO, Yang L. Cellular neural networks: theory. IEEE Trans Circ Syst 1988; 35(10): 1257-72.
[http://dx.doi.org/10.1109/31.7600]
[2] Chua LO, Yang L. Cellular neural networks: applications. IEEE Trans Circ Syst 1988; 35(10): 1273-90.
[http://dx.doi.org/10.1109/31.7601]
[3] Zurada JM. Introduction to artificial neural systems. St. Paul, USA: PWS Publishing Company 1992.
[4] Sheu BJ, Choi J. Neural information processing and VLSI. Morwell, MA, USA: Kluwer Academic Publishers 1995.
[http://dx.doi.org/10.1007/978-1-4615-2247-8]
[5] Mireia VS, Jankowski S, Szymanski Z. Cellular neural network learning using multilayer perceptron. In: 20th European Conference on Circuit Theory and Design; 2011 Aug 29-31; Linkoping, Sweden; 2011; pp. 214-7.
[6] Liu D, Michel AN. Cellular neural networks for associative memories. IEEE Trans Circuits Syst II: Analog Digit Signal Process 1993; 40(2): 119-21.
[http://dx.doi.org/10.1109/82.219843]
[7] Grassi G. A new approach to design cellular neural networks for associative memory. IEEE Trans Circuits Syst I 1997; 44: 835-8.
[http://dx.doi.org/10.1109/81.622988]
[8] Szolgay P, Szatmari I, Laszlo K. A fast fixed point learning method to implement associative memory on CNNs. IEEE Trans Circuits Syst I 1997; 44: 362-6.
[http://dx.doi.org/10.1109/81.563627]
[9] Grassi G, Sciascio ED. Learning algorithm for pattern classification using cellular neural network. Electron Lett 2000; 36(23): 1941-3.
[http://dx.doi.org/10.1049/el:20001368]
[10] Perfetti R, Costantini G. Multiplierless digital learning algorithm for cellular neural networks. IEEE Trans Circ Syst I Fundam Theory Appl 2001; 48: 630-5.
[http://dx.doi.org/10.1109/81.922467]
[11] Paasio A, Halonen K, Porra V. CMOS implementation of associative memory using cellular neural network having adjustable template coefficients. In: IEEE International Symposium on Circuits and Systems; 1994 May 30-Jun 2; London, England; 1994; pp. 487-90.
[http://dx.doi.org/10.1109/ISCAS.1994.409632]
[12] Wu CY, Lan JF. CMOS current-mode neural associative memory design with on-chip learning. IEEE Trans Neural Netw 1996; 7(1): 167-81.
[http://dx.doi.org/10.1109/72.478401] [PMID: 18255567]
[13] Xiu C, Liu Y. Associative memory based on hysteretic neural network. In: International Conference on Control, Automation and Systems Engineering (CASE); 2011 Jul 30-31; Singapore; 2011; pp. 1-3.
[http://dx.doi.org/10.1109/ICCASE.2011.5997623]
[14] Tanaka M, Aomori H, Nishio Y. Learning theory of cellular neural networks based on covariance structural analysis. In: Proceedings 12th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA); 2010 Feb 3-5; Berkeley, CA, USA; 2010; pp. 1-4.
[http://dx.doi.org/10.1109/CNNA.2010.5430326]
[15] Wu CY, Cheng CH. A learnable cellular neural network (CNN) structure with ratio memory for image processing. IEEE Trans Circ Syst I Fundam Theory Appl 2002; 49(12): 1713-23.
[http://dx.doi.org/10.1109/TCSI.2002.805697]
[16] Wu CY, Lai JL. The improvement of pattern learning and recognition capability in ratio-memory cellular neural networks with non-discrete-type Hebbian learning algorithm. In: IEEE Int Symp Circuits Syst Proc 2002; 1: 629-32.
[17] Lai JL, Wu CY. Architectural design and analysis of learnable self-feedback ratio-memory cellular nonlinear network (SRMCNN) for nanoelectronics systems. IEEE Trans VLSI Syst 2004; 12(11): 1182-91.
[18] Lai JL, Wu CY. A learnable self-feedback ratio-memory cellular nonlinear network (SRMCNN) for associative memory applications. In: IEEE International Conference on Electronics, Circuits and Systems, Proceedings; 2004 Dec 13-15; Israel; 2004; pp. 183-6.
[19] Lai JL, Chen YH, Wang YL. Design a learnable self-feedback ratio-memory cellular nonlinear network (SRMCNN) for associative memory applications. In: IEEE International Symposium on Consumer Electronics (ISCE); St. Petersburg, Russia; 2006; pp. 1-6.
[20] Kim YS, Min KS. Synaptic weighting circuits for cellular neural networks. In: Proceedings 13th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA); 2012 Aug 29-31; Turin, Italy; 2012; pp. 1-6.
[http://dx.doi.org/10.1109/CNNA.2012.6331430]
[21] Tukel M, Yalcin ME. A new architecture for cellular neural network on reconfigurable hardware with an advance memory allocation method. In: Proceedings 12th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA); 2010 Feb 3-5; USA; pp. 1-6.
[http://dx.doi.org/10.1109/CNNA.2010.5430316]
[22] Ayhan T, Yalcin ME. Randomly reconfigurable cellular neural network. In: 20th European Conference on Circuit Theory and Design (ECCTD); 2011 Aug 29-31; Linkoping, Sweden; pp. 604-7.
[http://dx.doi.org/10.1109/ECCTD.2011.6043615]
[23] Amanatidis D, Dossis M. Use of behavioral synthesis to implement a cellular neural network for image processing applications. In: 2011 Panhellenic Conference on Informatics; 2011 Sep 30-Oct 2; Kastoria, Greece; 2011; pp. 183-7.
[http://dx.doi.org/10.1109/PCI.2011.11]
[24] Akiduki T, Zhong Z, Takashi I, Tetsuo M. Associative memories with multi-valued cellular neural networks and their application to disease diagnosis. In: 2009 IEEE International Conference on Systems, Man and Cybernetics; 2009 Oct 11-14; San Antonio, TX, USA; 2009; pp. 3824-9.
[http://dx.doi.org/10.1109/ICSMC.2009.5346618]
[25] Abdullah AA, Mohamaddiah H. Development of cellular neural network algorithm for detecting lung cancer symptoms. In: IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES 2010); 2010 Nov 30-Dec 2; Kuala Lumpur, Malaysia; 2010; pp. 138-43.
[http://dx.doi.org/10.1109/IECBES.2010.5742216]
[26] Calik N, Cesur E, Tavsanoglu V. Handwritten character recognition application by using cellular neural network. In: Conference on Signal Processing and Communications Applications; 2013 Apr 24-26; Istanbul, Turkey; 2013; pp. 1-4.
[http://dx.doi.org/10.1109/SIU.2013.6531490]
[27] Duraisamy M, Jane FM. Cellular neural network based medical image segmentation using artificial bee colony algorithm. In: International Conference on Green Computing, Communication and Electrical Engineering (ICGCCEE); 2014 Mar 6-8; Coimbatore, India; 2014; pp. 1-6.
[28] Hao D, Ji L, Zhou L. Rapid vehicle edge detection based on cellular neural network. In: 10th International Conference on Natural Computation; 2014 Aug 19-21; Chengdu, China; pp. 118-22.
[http://dx.doi.org/10.1109/ICNC.2014.6975820]
[29] Chedjou JC, Kyamakya K. A universal concept based on cellular neural networks for ultrafast and flexible solving of differential equations. IEEE Trans Neural Netw Learn Syst 2015; 26(4): 749-62.
[http://dx.doi.org/10.1109/TNNLS.2014.2323218] [PMID: 25794380]