Murray, D; Koziniec, T; Lee, K; Dixon, M (2012)
Publisher: IEEE Computer Society
Languages: English
Types: Part of book or chapter of book
Ethernet data rates have increased by many orders of magnitude since standardisation in 1982. Despite these continual data rate increases, the 1500 byte Maximum Transmission Unit (MTU) of Ethernet remains unchanged. Experiments with varying latencies, loss rates and transaction lengths are performed to investigate the potential benefits of jumbo frames on the Internet. This study reveals that large MTUs offer throughputs much higher than a simplistic overhead analysis might suggest. The reasons for these higher throughputs are explored and discussed.
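The "simplistic overhead analysis" the abstract refers to can be sketched as follows. Using standard header sizes (these figures are textbook values, not numbers taken from the paper), the naive model predicts only a few percentage points of wire-efficiency gain from jumbo frames, which is why the much larger measured throughput improvements are noteworthy:

```python
# Naive per-frame overhead model for Ethernet (assumed standard values):
#   Ethernet framing: 14 B header + 4 B FCS + 8 B preamble + 12 B inter-frame gap
#   IPv4 header: 20 B, TCP header: 20 B (no options)
ETH_OVERHEAD = 14 + 4 + 8 + 12   # 38 bytes of wire overhead per frame
IP_TCP_HEADERS = 20 + 20         # 40 bytes of headers inside each frame

def goodput_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that carry TCP payload, for full-sized frames."""
    payload = mtu - IP_TCP_HEADERS
    wire_bytes = mtu + ETH_OVERHEAD
    return payload / wire_bytes

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {goodput_efficiency(mtu):.1%} efficient")
# MTU 1500: 94.9% efficient
# MTU 9000: 99.1% efficient
```

By this model a 9000-byte jumbo frame improves efficiency by only about 4% over the standard 1500-byte MTU; the paper's point is that real gains are larger because per-packet costs (interrupts, protocol processing, TCP dynamics) are not captured by a bytes-on-the-wire calculation.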
