Burles, Nathan John; O'Keefe, Simon; Austin, Jim; Hobson, Stephen John (2013)
Languages: English
Types: Article
Subjects: 1712, 2800, 1705, 1702
Despite their relative simplicity, Correlation Matrix Memories (CMMs) are an active area of research, as they can be integrated into more complex architectures such as the Associative Rule Chaining Architecture (ARCA) [1]. In this architecture, CMMs are used to reduce the time complexity of a tree search from O(b^d) to O(d), where b is the branching factor and d is the depth of the tree. This paper introduces the Extended Neural Associative Memory Language (ENAMeL), a domain-specific language developed to ease the development of applications using CMMs. We discuss various considerations that arose while developing the language, and techniques used to reduce the memory requirements of CMM-based applications. Finally, we show that the memory requirements of ARCA when using the ENAMeL interpreter compare favourably to our original results [1] obtained in MATLAB.
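To make the operations in the listing below concrete: a binary CMM in the style of Willshaw et al. [15] is trained by superimposing outer products of binary pattern pairs, and recalled with a matrix-vector product followed by a threshold. The following minimal MATLAB sketch is illustrative only (it does not use the paper's vector/train/recall toolkit), assuming fixed-weight patterns with 5 of 60 bits set, as in the listing:

    % Illustrative binary CMM (Willshaw-style); not the paper's toolkit
    n = 60;                                   % pattern length, as in the listing
    W = zeros(n);                             % empty correlation matrix memory
    x = zeros(n,1); x([2 11 21 32 45]) = 1;   % input pattern, 5 of 60 bits set
    y = zeros(n,1); y([1 15 25 32 47]) = 1;   % output pattern to associate

    % Training: OR the outer product of the pair into the memory
    W = min(W + y * x', 1);

    % Recall: weighted sums, then a Willshaw threshold at the input weight
    s = W * x;
    r = double(s >= sum(x));                  % recovers y for this single stored pair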
References

    • 1. Austin J, Hobson S, Burles N, O'Keefe S (2012) A rule chaining architecture using a correlation matrix memory. Int Conf on Artif Neural Netw 2012, pp 49-56. doi: 10.1007/978-3-642-33269-2_7
    • 2. Van Deursen A, Klint P, Visser J (2000) Domain-specific languages: an annotated bibliography. ACM SIGPLAN Not, pp 26-36. doi: 10.1145/352029.352035
    • 3. Brewer G (2008) Spiking cellular associative neural networks for pattern recognition. PhD Thesis, University of York
    • 4. Shanker KPS, Turner A, Sherly E, Austin J (2010) Sequential data mining using correlation matrix memory. Int Conf on Netw and Inf Technol, pp 470-472. doi: 10.1109/ICNIT.2010.5508469
    • 5. Spinellis D (2001) Notable design patterns for domain-specific languages. J Systems and Software, pp 91-99. doi: 10.1016/S0164-1212(00)00089-3
    • 6. Mernik M, Heering J, Sloane AM (2005) When and how to develop domain-specific languages. ACM Computing Surveys (CSUR), pp 316-344. doi: 10.1145/1118890.1118892
    • 7. Kosar T, Lopez PEM, Barrientos PA, Mernik M (2008) A preliminary study on various implementation approaches of domain-specific language. Information and Software Technology, pp 390-405. doi: 10.1016/j.infsof.2007.04.002
    • 8. Ladd DA, Ramming JC (1994) Two application languages in software production. USENIX Very High Level Languages Symposium Proceedings 1994, pp 169-178.
    • 9. Van Deursen A, Klint P (1998) Little languages: little maintenance? J Software Maintenance, pp 75-92. doi: 10.1002/(SICI)1096-908X(199803/04)10:2<75::AID-SMR168>3.0.CO;2-5
    • 10. Kieburtz RB, McKinney L, Bell JM, Hook J, Kotov A, Lewis J, Oliva DP, Sheard T, Smith I, Walton L (1996) A software engineering experiment in software component generation. 18th Int Conf on Software Engineering, pp 542-552.
    • 11. Basu A (1997) A language-based approach to protocol construction. PhD Thesis, Cornell University
    • 12. Bruce D (1997) What makes a good domain-specific language? APOSTLE, and its approach to parallel discrete event simulation. ACM SIGPLAN Workshop on Domain-Specific Languages, pp 17-35.
    • 13. Menon V, Pingali K (2000) A case for source-level transformations in MATLAB. ACM SIGPLAN Notices, vol 35, pp 53-65. doi: 10.1145/331960.331972
    • 14. Krueger, CW (1992) Software reuse. ACM Computing Surveys (CSUR), vol 24, pp 131-183. doi: 10.1145/130844.130856
    • 15. Willshaw DJ, Buneman OP, Longuet-Higgins HC (1969) Non-holographic associative memory. Nature 222, pp 960-962. doi: 10.1038/222960a0
    • 16. Ritter H, Martinetz T, Schulten K, Barsky D, Tesch M, Kates R (1992) Neural computation and self-organizing maps: an introduction. Addison-Wesley, Redwood City
    • 17. Austin J, Stonham TJ (1987) Distributed associative memory for use in scene analysis. Image and Vis Comput 5, pp 251-260. doi: 10.1016/0262-8856(87)90001-1
    • 18. Hobson S, Austin J (2009) Improved storage capacity in correlation matrix memories storing fixed weight codes. Int Conf on Artif Neural Netw 2009, pp 728-736. doi: 10.1007/978-3-642-04274-4_75
    • 19. Baum EB, Moody J, Wilczek F (1988) Internal representations for associative memory. Biol Cybern 59, pp 217-228. doi: 10.1007/BF00332910
    • 20. Orovas C, Austin J (1997) Cellular associative neural networks for image interpretation. Sixth Int Conf on Image Process and its Appl, pp 665-669. doi: 10.1049/cp:19970978
    • 21. Eisenstat SC, Gursky MC, Schultz MH, Sherman AH (1982) Yale sparse matrix package I: The symmetric codes. Int J for Numer Methods in Eng, pp 1145-1151. doi: 10.1002/nme.1620180804
    • 22. Russell SJ, Norvig P, Canny JF, Malik JM, Edwards DD (1995) Artificial intelligence: a modern approach. Prentice Hall, Englewood Cliffs
    • 23. Austin J (1992) Parallel distributed computation. Int Conf on Artif Neural Netw 1992.
MATLAB and ENAMeL code performing the same ARCA training and recall:

MATLAB:

    % Create CMMs
    M1=zeros(60);
    M2=zeros(60,3600);
    % Create and train vectors
    T1=vector(60,[2,11,21,32,45]);
    T2=vector(60,[4,14,22,34,48]);
    ...
    R1=vector(60,[1,15,25,32,47]);
    ...
    M1=train(M1,T1,R1);
    ...
    M2=traintp(M2,R1,T2,R1);
    ...
    % Recall vectors
    % Initialise recall
    O2=train(zeros(60),T1,R1);
    % Iterate through each level of the tree
    % Level 1
    O1=recall(M1,O2);
    O2=reshape(sum(recall(M2,O1),2),60,60)';
    O2(O2<5)=0;
    O2(O2>0)=1;
    % Level 2
    ...

ENAMeL:

    # Create CMMs
    M1=Matrix 60 60
    M2=Matrix 60 3600
    # Create and train vectors
    T1=Pattern 60 1 10 20 31 44
    T2=Pattern 60 3 13 21 33 47
    ...
    R1=Pattern 60 0 14 24 31 46
    ...
    M1=M1 v (T1:R1)
    ...
    M2=M2 v (R1:(T2:R1))
    ...
    # Recall vectors
    # Initialise recall
    O2=T1:R1
    # Iterate through each level of the tree
    # Level 1
    O1=O2.M1|5
    O2=O1,M2,5|5
    # Level 2
    ...
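The pair of MATLAB statements O2(O2<5)=0; O2(O2>0)=1; applies a Willshaw-style threshold at 5, the fixed weight of the stored patterns. Since the recalled values are non-negative integer counts, a compact equivalent (our paraphrase, not part of the original listing) is:

    O2 = double(O2 >= 5);   % keep only positions that reach the full pattern weight

The ENAMeL column has no separate threshold statements; the |5 suffix on its recall expressions appears to perform this thresholding as part of the recall itself.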
