The objective of this thesis is to study audio and video streaming on embedded devices. We will pick Microsoft Gadgeteer and the Microsoft .NET Micro Framework as the hardware and software platforms respectively. To build the device we need a number of modules. Of course we need display and audio output modules. We also need Wi-Fi and/or Ethernet modules in order to connect to the Internet and receive a stream (and perhaps send one). Finally, we need a mainboard to host the program logic.
I decided to buy modules from GHI Electronics because they are currently the largest manufacturer of Microsoft Gadgeteer modules. The modules and their prices are listed below (shipping to Armenia was US$52.55).
| Quantity | Product Model | Unit Price | Description |
|---|---|---|---|
| 1 | FEZ Spider Mainboard | US$99.95 | |
| 1 | USB Client DP Module | US$19.95 | For software installation and debugging |
| 6 | Button Module | US$4.95 | For volume and channel control |
| 1 | UC Battery 4xAA Module | US$19.95 | Portable (computer-independent) battery block |
| 1 | Wi-Fi RS21 Module | US$79.95 | |
| 1 | Ethernet ENC28 Module | US$19.95 | |
| 1 | Music Module | US$34.95 | With a separate processor |
| 1 | Hub AP5 Module | US$29.95 | For connecting all six buttons to the mainboard |
The gadget should connect to a service over the Internet and receive audio and video streams. Let us elaborate on this. It should use WCF for .NET Micro Framework as the web service technology. WCF makes it easy to build secure, reliable, operation-oriented web services. It sounds easy, but it is not as easy as it is on a PC. Another option was to use UDP (User Datagram Protocol), because we do not need double-checked delivery: if some bits are changed or missing in the received signal, that is acceptable for video and audio streaming, and it is common practice to use UDP for it. The problem is that we would then have to implement a higher-level protocol to separate the video, audio, and other information streams, such as teletext or the logo of the channel. This is far more complicated and will not be included in the prototype, but as a product vision it should offer a common set of contemporary television features.
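To make the framing problem concrete, here is a minimal sketch of the kind of scheme a raw-UDP design would force on us. Python is used for illustration only (the prototype itself targets C# on .NET Micro Framework), and the one-byte tag values are hypothetical:

```python
# Hypothetical framing for a raw-UDP design: each datagram starts with a
# one-byte tag identifying the stream it belongs to. Every new stream type
# (teletext, channel logo, ...) would force a protocol revision -- exactly
# the maintenance burden that pushes us toward WCF operations instead.
STREAM_TAGS = {0x01: "video", 0x02: "audio", 0x03: "teletext", 0x04: "logo"}

def demultiplex(datagrams):
    """Group raw datagrams into per-stream payload lists by their tag byte."""
    streams = {name: [] for name in STREAM_TAGS.values()}
    for packet in datagrams:
        tag, payload = packet[0], packet[1:]
        name = STREAM_TAGS.get(tag)
        if name is None:
            continue  # unknown tag: silently dropped, another versioning hazard
        streams[name].append(payload)
    return streams
```

Note how an older device receiving a packet with an unknown tag can only drop it; there is no clean way to negotiate features, which is the extensibility problem discussed below.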
The advantage of WCF over UDP is that there is an implementation of WCF for .NET Micro Framework. Another advantage is that the service can be extended, so new versions of the product can have new sets of features and still work with older versions. Doing the same over UDP would be hard and would be a source of many bugs. WCF and UDP are of course completely different things, and I am not comparing them as such; this is a comparison of ways to implement a web service client on an embedded device.
Another issue is software updates. The problem is that you can ship the embedded software with the hardware, but you cannot easily update it afterwards; it has to be updated from a PC, which requires a separate PC-based program and a separate service.
Because the cloud is powerful and sophisticated, we do not have to worry about limitations there; we can do nearly anything on the cloud side. The main concern is to move as much computation and logic to the cloud as possible. There are also issues of connectivity and compatibility. Usually, when we want to perform calculations on the cloud side, we have to transfer extra data over the Internet, and we have to maintain compatibility across different versions of the gadget's software. Again we return to the choice between WCF and UDP. With WCF we can design the cloud service so that the same service serves many different versions of gadgets at once. With UDP this is hard, and there are no ready-made solutions for us.
Although there are many gadgets that perform video and audio streaming, I found that very few gadget applications use the sophisticated technologies common in desktop application development. The reasons are obvious: the computational power and power sources of embedded hardware are limited, so one has to use simple, limited technologies suitable for the hardware.
So where did the idea come from? As a .NET developer I was familiar with the Microsoft .NET Framework and its compact edition (.NET Compact Framework) for mobile and hand-held devices. I found that there is also a .NET Micro Framework, an edition of the .NET Framework for small gadget-like devices. Before finding it, I had thought that .NET was mainly for providing software solutions to large enterprises, because it enables development of cloud computing software, Windows services, desktop applications, and so on. The .NET Micro Framework includes an implementation of Microsoft WCF (Windows Communication Foundation), which I am going to use in my gadget. WCF enabled service-oriented architecture (SOA) using the Simple Object Access Protocol (SOAP), and it is mainly used for financial transactions, where correctness is very important. In other cases, where correctness and security are less important, Representational State Transfer (REST) is used instead of SOA. I will use WCF in my application to show that sophisticated technologies can be used for small gadgets as well.
Designing embedded hardware
In order to explain the concepts of embedded hardware and software system architecture, we will first discuss their layers (Catsoulis & Orwant, 2002). The layers of complex embedded computers are similar to those of desktop computers. From bottom to top they are: hardware, firmware, operating system, and application(s). Note that desktop computers are usually multitasking, while embedded systems, depending on their complexity, can run a single application or multiple applications. Depending on the complexity, the processor has to be chosen carefully. The primary factor to consider is the instruction set, also known as opcodes or machine codes; another is the type of instruction set, such as CISC or RISC. The mandatory components of a basic computer system are the processor, memory, and I/O devices such as disks, display, printer, keyboard, and mouse. Each component is a combination of conductors, insulators, or semiconductors, which means that voltage and current are also very important (Catsoulis & Orwant, 2002).
In order for two devices to communicate (send and receive bytes), they need to know each other's addresses, and so that multiple applications can have their devices communicate, there are serial ports; each application can use its own port. The most common serial interface is the Universal Asynchronous Receiver Transmitter (UART) (Catsoulis & Orwant, 2002). UARTs are also sometimes called Asynchronous Communication Interface Adapters (ACIAs) (Catsoulis & Orwant, 2002). Asynchronous means that no shared clock is involved: sending and receiving are not synchronized with each other.
Universal Serial Bus (USB) is a high-speed bus that allows up to 127 devices to be connected (Catsoulis & Orwant, 2002). It was designed by Compaq, Digital Equipment Corporation, IBM, Intel, Microsoft, NEC, and Nortel. It is a common way to connect devices to desktop computers and install embedded software on them.
A Controller Area Network (CAN) is a bus that connects working blocks, much as a motherboard connects to its processor; each block on the bus can be considered a separate device on the network.
Ethernet is probably the most well-known way of connecting a device to a network, and embedded hardware commonly uses an Ethernet network connection.
Hardware deals with analog signals, but software understands only digital ones, so at the hardware level signals are converted to digital. Let us say that when there is no current, V = 0, where V is the voltage, and that the maximum voltage is M. We can then say that if V < M/2, the digital value is 0; otherwise it is 1. We also need the notion of a machine clock, because in hardware nothing happens instantaneously: circuits need time to settle.
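The thresholding rule above can be written down directly as a small function (a Python sketch for illustration; the threshold M/2 is the simplification used in the text, real logic families define separate low and high bands):

```python
def digitize(v, m):
    """Map an analog voltage v to a digital bit, given maximum voltage m.

    Following the rule above: anything below half the maximum voltage
    reads as 0, anything at or above it reads as 1.
    """
    return 0 if v < m / 2 else 1

# For a 5 V logic level: 1.2 V reads as 0, 3.9 V reads as 1.
```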
A component-based model integrated framework for embedded software
Embedded system design is an error-prone and time-consuming process (Chen, Xie, & Shi, 2005). Embedded systems are usually designed for specific purposes: hardware is designed to run specific software systems, and software systems are designed to run specific applications, which in turn are designed for specific tasks known in advance. These are called heterogeneous systems. Digital televisions are designed to connect to a broadcaster's service, mobile terminals to payment systems, and so on. The problem is to find the best way of designing the interconnection topologies, communication protocols, and communication channels. A communication channel is a way of transferring data; on the World Wide Web, the communication channel is TCP/IP. A real-world analogy is the sounds of a human language: being able to make a sound does not mean being able to speak, so we need higher layers. A communication protocol is a set of rules for transferring information; in other words, it provides a more complex structure for transferring data. We can say that data transferred through communication protocols is information. The World Wide Web's equivalent is the Hypertext Transfer Protocol; in the human analogy, it is like being able to construct words out of sounds.
Formalizing software architectures for embedded systems
Currently, each embedded system uses its own programming model; even the languages are very different. Usually each gadget is designed for a specific task, which is why systems are designed task-centrically. This paper (Binns & Vestal, 2001) offers a model-centric approach instead, meaning that system architects design systems based on models rather than a single task. Actions and operations should be generalized and connections should be abstract. The paper suggests using a Usage Scenario for basic scenario description, an Automated Model Assembler for scenario combination, and a Model Compiler that combines models, validates them, and compiles them into embedded software. It also offers a useful technique for constructing programming languages for embedded systems.
Towards a trustworthy, lightweight cloud computing framework for embedded systems
We already have fully developed cloud computing infrastructures for desktop and server computers. This technology is evolving for mobile devices as well, but what about embedded devices? If we succeed in making cloud computing available for embedded devices, we will raise the capability of lightweight devices toward supercomputer levels and eliminate barriers between desktop computers and embedded devices (Dietrich & Winter, 2011). There still remains the question of security. To provide secure data transfer, software uses cryptography, but embedded devices have limited computational power, which means a limited ability to execute complex cryptographic algorithms. Another question this paper discusses is energy efficiency: a small device has very limited sources of energy and very limited ways of spending it. The paper defines and demonstrates a method of embedded trusted computing.
Robot as a service in cloud computing
The service-oriented approach has been used in many software solutions for a long time, and since the rise of cloud computing the two techniques have been used together (Yinong, Zhihui, & Marcos, 2010). Especially over the last five years this paradigm seems to be in large-scale use. This research paper includes the design, implementation, and evaluation of a Robot as a Service (RaaS) unit. Robots have mainly been consumers of services; now we can think of them as service providers. The implementations follow common service standards, development platforms, and execution infrastructure. They are available for the Windows and Linux operating systems, and they support the Atom and Core 2 Duo hardware architectures and the Microsoft Visual Programming Language (VPL).
Statement of the problem
The primary goal of my work is to build a gadget that uses high-end technologies of the kind employed by enterprise applications. Another criterion is generality: although it is called TIKSN TV, it will not be just a television gadget; it should also support presentation broadcasting and other applications related to video and audio streaming.
The hardware will follow the specification of the Microsoft .NET Gadgeteer open-source rapid prototyping platform (Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction, 2013). The platform is designed and specified by Microsoft Research, a division of Microsoft Corporation. The actual hardware modules are built by various companies such as GHI Electronics, Sytech Designs, and Seeed Studio.
With this technology one can design hardware without an electronics background; in other words, it makes it possible for computer science people to design hardware, which is the first step in building embedded-system-like gadgets. The main modules will be a single-board microcontroller, a display, Wi-Fi and/or Ethernet, and a music module (with a separate processor).
There is a variety of Gadgeteer modules, but sometimes a specific one is needed to solve a specific problem. Gadgeteer enables hardware engineers to design new modules: one can easily design a Gadgeteer module, for instance, to connect to a specific model of car to control or diagnose it, or to measure temperature, height, speed, or other quantities.
Choosing a software platform
When one talks about embedded software, there are usually two options: Windows Embedded and the Raspberry Pi (powered by Linux-based operating systems). I want to talk about the first one. Windows Embedded supports a wide range of hardware, from small devices to huge rigs. One of the reasons for choosing the .NET Micro Framework instead is that my device needs something really small and energy-saving. Another reason is that there are already many solutions for Windows Embedded, because it powers numerous custom-purpose computer devices; it even powers car-manufacturing robots, and on such hardware there are no limitations like problems running cryptographic algorithms or limited network traffic. But Windows Embedded simply cannot be installed on devices as small as mine.
The application should connect to the cloud service and retrieve video and audio information through a WCF service. However, it should do so wisely, because network traffic is limited. It should also be flexible: in the case of presentation broadcasting, the video signal may not change for a long time, so there is no need to retrieve and render the same picture over and over. The same can happen with audio. This is still an open issue, but not the hardest one.
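The idea of not re-sending an unchanged picture can be sketched as follows. This is an illustrative Python stand-in (the real client would be C# on .NET Micro Framework, and the function name and digest-exchange scheme are hypothetical): the client remembers a digest of the frame it is showing, and the service transmits a frame only when the digest differs.

```python
import hashlib

def frame_if_changed(frame_bytes, last_digest):
    """Return (frame, digest): the frame is None when nothing changed.

    Comparing digests instead of full frames lets the service skip
    duplicate transmissions during static scenes or presentation slides.
    """
    digest = hashlib.sha1(frame_bytes).hexdigest()
    if digest == last_digest:
        return None, last_digest  # nothing new to transmit or render
    return frame_bytes, digest
```

A real deployment would have to weigh the cost of hashing on the server against the bandwidth saved, which is why the text leaves this as an open issue.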
The simplest type of image is a bitmap, which is just an array of colored pixels. If one pixel takes 4 bytes, then an image 1000 pixels wide and 1000 pixels high contains 4 million bytes of information. On the other hand, an image in which all pixels are black really carries only about 4 bytes of information, so there is no need to transfer 4 million bytes to convey it (Weinberger, Seroussi, & Sapiro, 1996). We have already mentioned that heavy algorithms are costly to run on devices like this, so this also remains an open issue.
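A minimal run-length encoder makes the point concrete (an illustrative Python sketch; real codecs such as the LOCO-I algorithm cited above are far more sophisticated):

```python
def rle_encode(pixels):
    """Run-length encode a flat list of pixel values into [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([p, 1])  # start a new run
    return runs

# An all-black 1000x1000 image (one million identical pixels) collapses
# to a single run, while the raw bitmap would need 4 million bytes.
black_image = [0] * (1000 * 1000)
print(rle_encode(black_image))  # [[0, 1000000]]
```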
There is a similar issue with audio. If we assemble our hardware with only an audio output module, we have to think about audio compression as well (Ghido & Tù, 2008). However, I decided to delegate this issue to the hardware level: we will use the music output module instead of the audio output module. The music module has its own processor and built-in firmware that can decompress the streamed signal into uncompressed audio output.
Separation of video and audio signal
Computer programmers familiar with low-level programming languages will expect an issue with separating the video and audio signals, and there would be one if we used ordinary sockets and tried to construct a protocol on top of them. We could separate the signals into packets, with each packet marked as audio or video. But what about custom actions like retrieving the logo of the current channel, or something like teletext? Adding features like that can mess up the whole code. As mentioned in the introduction, I am going to use WCF. WCF is not just a data retrieval technique; it calls remote operations: the client requests methods that are executed in the cloud, and the results are returned to the client. We can implement getting the audio and video signals as separate operations, and it is possible to add as many features as needed without damaging existing ones.
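The operation-oriented style can be sketched like this. Python is a stand-in for the actual WCF service contract in C#, and the method names and payloads are hypothetical; the point is that each feature is a separate operation rather than a tagged packet type:

```python
class StreamingService:
    """Stand-in for a WCF service contract: each feature is its own operation.

    Adding teletext or a channel logo later means adding a method, not
    revising a shared wire protocol, so older clients keep working.
    """

    def get_video_frame(self, channel):
        return b"<video frame for %s>" % channel.encode()

    def get_audio_chunk(self, channel):
        return b"<audio chunk for %s>" % channel.encode()

    def get_channel_logo(self, channel):
        # A later product version can add this without touching the two
        # operations above -- the extensibility argued for in the text.
        return b"<logo for %s>" % channel.encode()
```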
There are two things to consider about the connection. First, it should support reconnection: if the connection is lost for a short period, the client should retry a couple of times, and it should preserve session-specific information (or keep that information separately). Second, network speed fluctuates. This can be handled either in the embedded software or in the cloud service; currently I think it is better to delegate this issue to the cloud software, which means we should also consider serving lower-quality images.
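The reconnection behavior can be sketched as a simple retry loop (illustrative Python; the attempt count and delay are arbitrary, and session state is deliberately kept outside the function so a successful reconnection can resume where the lost connection left off):

```python
import time

def connect_with_retry(connect, attempts=5, delay=1.0):
    """Call `connect` up to `attempts` times, sleeping `delay` seconds between tries.

    Raises the last connection error if every attempt fails.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except OSError as error:
            last_error = error
            time.sleep(delay)
    raise last_error
```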
The cloud service should first listen for new devices. After a device connects, the service should create a separate thread to serve it.
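Conceptually, the listen-and-dispatch pattern looks like the sketch below (plain Python sockets for illustration; in practice the WCF hosting infrastructure manages listening and per-client dispatch for us, and the ping/pong exchange is a made-up placeholder protocol):

```python
import socket
import threading

def serve_client(conn):
    """Serve one connected device on its own thread (placeholder protocol)."""
    with conn:
        data = conn.recv(16)
        if data == b"ping":
            conn.sendall(b"pong")

def run_listener(ready):
    """Listen for devices and hand each accepted connection to a new thread."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # ephemeral port for the demonstration
    server.listen()
    ready["port"] = server.getsockname()[1]
    ready["event"].set()            # tell the caller where to connect
    conn, _ = server.accept()       # a real service would loop here forever
    threading.Thread(target=serve_client, args=(conn,)).start()
    server.close()
```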
This will be done with WCF. Let me discuss the issue.
Imagine that the cloud service should broadcast the CNN International television channel. I am not sure that CNN has a service or API (application programming interface), but this is just an example. Implementing a WCF service means implementing the operation methods as if the service supported only one device; if we fetched and decoded the CNN video stream inside the video or audio retrieval methods, we would fetch the same information repeatedly. The cloud service in this context should instead appear as a single client to the CNN servers. One approach is to fetch and cache the stream separately; in that case we have to answer the question of how to distribute different external channels to multiple internal channels. Another application could be streaming video files repeatedly; this issue is similar to the previous one, so I will not discuss it separately.
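The fetch-and-cache idea can be sketched as follows (illustrative Python; `fetch` stands in for whatever single upstream connection the service keeps toward an external broadcaster, and the class name is hypothetical). The lock ensures that concurrent requests from many internal clients trigger only one upstream fetch per channel:

```python
import threading

class UpstreamCache:
    """Fetch each external channel once; share the result among all internal clients."""

    def __init__(self, fetch):
        self._fetch = fetch
        self._cache = {}
        self._lock = threading.Lock()

    def get(self, channel):
        with self._lock:
            if channel not in self._cache:
                # Only the first request for a channel reaches the broadcaster.
                self._cache[channel] = self._fetch(channel)
            return self._cache[channel]
```

A streaming service would of course expire and refresh entries as new data arrives; this sketch only shows the single-upstream-client property.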
In this experiment I want to demonstrate that video playback is not pleasant for the human eye on this particular display module. The program first creates a list of colors, then fills the whole screen with those colors, one by one, continuously. After running the program on the mainboard, it is obvious how one color changes to another. One does not want to watch a news report or a movie on a display like this.
Video frequency measurement
I ran a simple test: a program that should print the time on the screen at a 60 Hz frequency. A counter variable is set to 0 initially; each time the current time is displayed, the program increments the counter by one and prints it to the debug stream. I set up a stopwatch and collected this information in order to determine the actual frequency of the display module. After 7 minutes, 40 seconds, and 46 milliseconds, the counter was 1873. The results showed that the display module's maximum frequency was approximately 4 Hz. One cannot have video streaming at 4 Hz; however, for showing status information, an image slide show, or similar things, this display is fine.
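The measured frequency follows directly from the counter and the elapsed time (a quick check, written out in Python):

```python
# 7 min 40 s 46 ms of wall-clock time, 1873 screen updates counted.
elapsed_seconds = 7 * 60 + 40 + 0.046   # 460.046 s
frames_drawn = 1873
frequency_hz = frames_drawn / elapsed_seconds
print(round(frequency_hz, 2))  # 4.07 -- far below the 60 Hz target
```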
Another experiment checks the clock of the mainboard. The software logic is simply to print the current date and time (as kept by the hardware) on the screen. By measuring the difference between the timer readings on the PC and on the mainboard, I found that the mainboard had a constant 12-minute offset: the time on the hardware is not correct, but the rate at which it changes is. This experiment was done to show that experiments based on clock offsets are reliable, and it also means that we can use time-period-based calculations in embedded software design. For instance, we can show that some event took a certain amount of time; to specify the exact date and time at which the event occurred, we have to add the time difference between the PC and the gadget. However, one cannot be sure that every gadget has the same offset from the actual time.
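The correction described above amounts to a single addition (a Python sketch; the 12-minute figure is the offset observed in this experiment, its sign depends on which clock runs ahead, and per-gadget calibration would be needed in practice):

```python
from datetime import datetime, timedelta

# Offset measured once against the PC; the gadget's rate is correct,
# so every later gadget timestamp can be corrected by the same amount.
measured_offset = timedelta(minutes=12)

def correct_gadget_time(gadget_time, offset=measured_offset):
    """Convert a gadget timestamp to actual time using the measured offset."""
    return gadget_time + offset
```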
As already mentioned, we are using the music module instead of the simple audio module. The difference is that the music module has its own processor and is capable of decompressing MP3, OGG, and other formats before playing. The audio module just plays an audio signal; it requires the signal to be already decompressed, so compressed formats will not play through it.
So our experiment is to test the music module. I embedded a music resource file in a sample program; after deploying it to the hardware, it played successfully. However, it is worth mentioning that the music file's quality and size were low.
The presentation gadget will provide a way of showing presentations to audiences and receiving questions from the audience. It requires two different gadgets and cloud services; most of its parts will be covered in this project.
One can think of the conference gadget as a gadget specialized in Skype-style video calls. Again, most of it will already be covered in my work; the main difference is that it should support a microphone and solve the issues related to that.
This thesis shows that it is possible to use technologies like WCF to design and build small embedded systems. This can be used not only to create various useful and entertaining gadgets, but also security systems, smart houses, remote-controlled cars, and mood-sensitive dresses (Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments, 2013).
We showed in this paper that it is possible to build an embedded system using state-of-the-art technologies designed for modern personal computers. However, because of issues such as deployment, maintenance, cost, efficiency, computational power, security, and development time, a mobile application is recommended for solving problems like audio and video streaming. Gadgeteer hardware is recommended for creating control panels or entire systems for projects like home security systems, diagnostic tools, or measuring tools.
Binns, P., & Vestal, S. (2001). Formalizing software architectures for embedded systems. Proceedings of the First International Workshop on Embedded Software (pp. 451-468). London, UK: Springer-Verlag.

Catsoulis, J., & Orwant, J. (2002). Designing embedded hardware. Sebastopol, CA, USA: O'Reilly & Associates, Inc.

Chen, W., Xie, C., & Shi, J. (2005). A component-based model integrated framework for embedded software. Proceedings of the First International Conference on Embedded Software and Systems (pp. 563-569). Berlin, Heidelberg: Springer-Verlag.

Dietrich, K., & Winter, J. (2011). Towards a trustworthy, lightweight cloud computing framework for embedded systems. Pittsburgh, PA, USA: Springer Berlin Heidelberg.

Ghido, F., & Tù, I. (2008). Benchmarking of compression and speed performance for lossless audio compression algorithms.

Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments (PETRA '13). (2013). New York, NY, USA: ACM.

Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction (TEI '13). (2013). New York, NY, USA: ACM.

Weinberger, M. J., Seroussi, G., & Sapiro, G. (1996). LOCO-I: A low complexity, context-based, lossless image compression algorithm. Data Compression Conference (pp. 140-149). Palo Alto, CA, USA: Hewlett-Packard Laboratories.

Yinong, C., Zhihui, D., & Marcos, G.-A. (2010). Robot as a Service in cloud computing. Service Oriented System Engineering, IEEE International Symposium on. Nanjing, China: IEEE Computer Society.