Guliyev, Namig; Ismailov, Vugar (2016)
Publisher: Massachusetts Institute of Technology Press (MIT Press)
Languages: English
Types: Article
Subjects: Sigmoidal functions, λ-monotonicity, Continued fractions, ACM : I.: Computing Methodologies/I.2: ARTIFICIAL INTELLIGENCE/I.2.6: Learning/I.2.6.2: Connectionism and neural nets, 41A30, 65D15, 92B20, ACM : I.: Computing Methodologies/I.5: PATTERN RECOGNITION/I.5.1: Models/I.5.1.3: Neural nets, [ MATH.MATH-NA ] Mathematics [math]/Numerical Analysis [math.NA], Bernstein polynomials, [ INFO.INFO-NE ] Computer Science [cs]/Neural and Evolutionary Computing [cs.NE], Calkin--Wilf sequence, [ INFO.INFO-IT ] Computer Science [cs]/Information Theory [cs.IT], Smooth transition function, Computer Science - Information Theory, [ MATH.MATH-IT ] Mathematics [math]/Information Theory [math.IT], Computer Science - Neural and Evolutionary Computing, ACM : C.: Computer Systems Organization/C.1: PROCESSOR ARCHITECTURES/C.1.3: Other Architecture Styles/C.1.3.7: Neural nets, Mathematics - Numerical Analysis, ACM : F.: Theory of Computation/F.1: COMPUTATION BY ABSTRACT DEVICES/F.1.1: Models of Computation/F.1.1.4: Self-modifying machines (e.g., neural networks)
The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this paper, we consider constructive approximation on any finite interval of $\mathbb{R}$ by neural networks with only one neuron in the hidden layer. We construct algorithmically a smooth, sigmoidal, almost monotone activation function $\sigma$ providing approximation to an arbitrary continuous function within any degree of accuracy. This algorithm is implemented in a computer program, which computes the value of $\sigma$ at any reasonable point of the real axis.
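As a hedged illustration of the network form studied in the abstract, the sketch below evaluates a single-hidden-layer network with exactly one hidden neuron, f(x) = c·σ(wx + θ) + d. The paper's contribution is an algorithmically constructed smooth, almost monotone sigmoidal σ with the universal approximation property on finite intervals; that construction is not reproduced here, so the standard logistic function stands in as a placeholder σ, and the parameter names (`c`, `w`, `theta`, `d`) are illustrative, not taken from the paper.

```python
import math


def logistic(t: float) -> float:
    """Placeholder sigmoidal activation (NOT the paper's constructed sigma)."""
    return 1.0 / (1.0 + math.exp(-t))


def one_neuron_net(x: float, c: float, w: float, theta: float, d: float,
                   sigma=logistic) -> float:
    """Single hidden layer network with one hidden neuron:
        f(x) = c * sigma(w * x + theta) + d.
    The paper shows that for a suitably constructed sigma, networks of
    exactly this form can approximate any continuous function on a finite
    interval of R to any accuracy."""
    return c * sigma(w * x + theta) + d


# Example: with the logistic placeholder, f(0) = 1 * sigma(0) + 0 = 0.5.
print(one_neuron_net(0.0, c=1.0, w=1.0, theta=0.0, d=0.0))
```

With an ordinary logistic σ, one hidden neuron can of course only fit monotone ramp-like functions; the point of the paper is that a single, carefully constructed σ removes that limitation.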

Institute of Mathematics and Mechanics, Azerbaijan National Academy of Sciences, 9 B. Vahabzadeh str., AZ1141, Baku, Azerbaijan.