Credits: Designnews

Every time a new embedded software project starts, the air is electrified with energy, hope, and excitement. For engineers, there are few things on Earth as exciting as creating a new project and bringing together new and innovative ideas that have the potential to change the world. Unfortunately, shortly after project kick-off, engineers can quickly lose that passion as they are forced to dig into the nuts and bolts: once again writing microcontroller drivers and trying to integrate real-time operating systems (RTOSes) and third-party components. These repetitive tasks consume time and energy and dampen product innovation. An interesting solution that could help developers is beginning to arrive: embedded system platforms.

An embedded system platform contains all the building blocks a developer needs to get a microcontroller up and running quickly and to keep their focus on the product. Too much time and money are wasted just trying to get a microcontroller’s software up and running. The idea behind a platform is that drivers, frameworks, libraries, schedulers, and sometimes even application code are already provided, so developers can focus on their product features rather than mundane and repetitive software tasks.

 

 

HAL Design for MCUs. The speed at which a developer is expected to write software often results in device drivers that are difficult to understand, hard to maintain, and difficult to port. Join Jacob Beningo at ESC Silicon Valley, Dec. 6-8, 2016, in San Jose, Calif., as he describes methods and techniques that can be used to develop a reusable hardware abstraction layer (HAL) that is easy to maintain and use across multiple projects and platforms. Register here for the event, hosted by Design News’ parent company, UBM.

Embedded software platforms give developers an opportunity to shave months from the development cycle by leveraging existing HALs and APIs. Becoming an expert in every little nuance of a microcontroller is no longer required. HALs and APIs abstract the lower-level hardware and make development similar to writing software on a PC, although developers still need to keep in mind that they are working in a resource-constrained environment. Make a simple call to the UART HAL, and serial data can be transmitted in minutes rather than weeks.
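
To make the idea concrete, here is a minimal sketch of what such an abstraction might look like in C. The type and function names (uart_config_t, uart_init, uart_tx) are hypothetical rather than taken from any particular vendor’s platform; a real platform would supply the register-level implementations for each target microcontroller, and they are stubbed here only so the example stands alone.

    /* Hypothetical UART HAL sketch. The application below knows only the
     * abstract interface; a platform would supply the real driver bodies. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t baud_rate;      /* e.g. 115200 */
        uint8_t  data_bits;      /* usually 8 */
        uint8_t  stop_bits;      /* usually 1 */
        bool     parity_enable;
    } uart_config_t;

    /* Stand-in driver bodies so the sketch compiles on its own. */
    static bool uart_init(uint8_t channel, const uart_config_t *config)
    {
        (void)channel; (void)config;
        return true;             /* pretend the peripheral was configured */
    }

    static bool uart_tx(uint8_t channel, const uint8_t *data, size_t length)
    {
        (void)channel; (void)data; (void)length;
        return true;             /* pretend the bytes were shifted out */
    }

    int main(void)
    {
        const uart_config_t cfg = { 115200u, 8u, 1u, false };
        static const uint8_t msg[] = "Hello from the platform HAL\r\n";

        if (uart_init(0u, &cfg)) {
            (void)uart_tx(0u, msg, sizeof(msg) - 1u);
        }
        return 0;
    }

The point of the sketch is that application code written against an interface like this can move between microcontrollers unchanged; only the driver bodies behind uart_init and uart_tx differ per target.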

There are many advantages to platform development that developers should keep in mind:

  • Leveraging existing software to prevent reinventing the wheel
  • Faster time to market
  • Potential to decrease overall project costs
  • Increased firmware robustness

 

There are certainly a few potential issues that developers should be concerned with, as well:

  • Platform licensing models
  • Cost to change platforms if direction changes in the future
  • Becoming dependent upon a third-party platform
  • Having too much free time due to smoothly moving projects

 

The truth is that embedded system development has become increasingly complex in the past decade as microcontrollers have grown exponentially in capability. That capability has been driven by mobile technologies and the need for more connectivity in our devices, yet the typical development timeline has stayed roughly the same. With more to do, smaller budgets, and the same time to do it in, developers need to become smarter and find new methods and ways to develop their systems without compromising robustness, integrity, and features.

One possible solution is to use embedded platforms such as the Renesas Synergy Platform, Electric Imp, and Microchip Harmony, among others. (These are the platforms I’ve had the opportunity to explore so far.) Platforms range from extending a traditional developer’s capabilities to radically transforming development techniques. In either case, given typical time, budget, and feature-set constraints, it is clear that building embedded systems from the ground up will very soon no longer be an option.

 

Credits: Htmlgoodies

During the development of a website, developers must use a variety of tools: software to create the site, edit graphics, transfer files, and SSH or telnet into the server. This article covers five of the best.

NoteTab Pro

NoteTab Pro is a text editor on steroids. It supports HTML, Perl, LaTeX, ASP, Java, JavaScript, PHP, AutoLISP, SQL, COBOL, 4DOS, JCL, VHDL, ADO, VBScript, VRML, and more. It features a tabbed interface, as well as “clipbook” libraries for HTML, JavaScript, and CSS that make editing a web page a lot easier. There is a free “Light” version, a full “Pro” version that retails for $29.95, and a Standard version for $19.95.

WS_FTP Pro

Obviously you will need a tool that enables you to move files from your local machine to your web server. Ipswitch’s WS_FTP Pro is an industry standard that has been used by web developers for years, with over 40 million users worldwide. It allows you to transfer files over the FTP, SSL, SSH, and HTTP/S protocols. It is also secure, with 256-bit AES encryption, FIPS 140-2 validated cryptography, OpenPGP file encryption, and file integrity validation up to SHA-512. This isn’t your father’s File Transfer Protocol (FTP) software. It retails for $54.95, or $89.95 with a one-year support agreement, and a free demo version is available for download.

PuTTY

PuTTY is a free implementation of Telnet and SSH which can be used on Windows and Unix platforms, and it includes an xterm terminal emulator. It supports standard telnet sessions, SSH-2 and SSH-1, as well as local echo. The software isn’t as full featured as some commercial SSH tools, but it will get the job done–and well. If you need to telnet or SSH into your web server, this is the tool to use.

Dreamweaver

Adobe’s Dreamweaver CS5 is a full-featured WYSIWYG editor that allows developers to design visually as well as directly within the code. It supports PHP-based CMSes such as Drupal, WordPress and Joomla, and enables developers to create websites using HTML 5. It also features CSS Starter Layouts to get you started, and is integrated with Adobe BrowserLab, which allows developers to preview dynamic web pages and local content using multiple views and diagnostic and comparison tools. It retails for $399, and a demo version is available for download for free.

PaintShop Pro

Corel’s PaintShop Pro has been a web developer’s friend since the days when Jasc Software owned it, more than six years ago. It allows you to import, edit, and share your images. It enables those of us without graphic skills to make quick fixes to images via its Express Lab feature. Creating GIF images with transparent backgrounds is a snap. It also allows users to upload images directly to Facebook, YouTube, and Flickr. PaintShop Pro retails for $99.99, and a free trial version is available for download.

As you can see, the software you choose can make your life easier and enhance the development process from start to finish. If you know of other tools that belong in every web developer’s toolbox, let us know so we can spread the word!

Credits: Cio-today

As Microsoft continues to roll out new preview builds of its next Windows 10 update, it is also working to make those releases easier and more efficient to download. The new Unified Update Platform (UUP) will become available for developers in stages, with the first version — for Windows Mobile — announced yesterday.

In addition to the Unified Update Platform, Microsoft yesterday also released an Insider Preview Build for the next major refresh of Windows 10. Set for general release in early 2017, the so-called “Creators Update” of Windows 10 will place a heavy emphasis on 3D imaging, painting and other creativity tools.

The new Windows Build 14959 for Mobile and PC gives developers in the Insider Fast ring a chance to take a number of new features for a test run, and also fixes known issues with how the previous build managed applications, displays and settings. Microsoft is encouraging developers to install the latest build ahead of a problem-finding “Bug Bash” set to start next Monday.

‘Differential Downloads’ for Efficiency

Up until now, when Microsoft released a major update of its Windows operating system, users have had to download the entire update package, which could be both time-consuming and resource-intensive. With UUP, however, the download size of updates will be reduced, with more of the heavy lifting handled on Microsoft’s cloud rather than on the side of the customer’s device.

“We have converged technologies in our build and publishing systems to enable differential downloads for all devices built on the Mobile and PC OS,” Bill Karagounis, director of program management for the Windows Insider Program and OS Fundamentals, wrote yesterday on the Windows blog. “A differential download package contains only the changes that have been made since the last time you updated your device, rather than a full build.”
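
As a rough illustration of the idea, and not Microsoft’s actual UUP mechanism, a differential package can be thought of as the result of comparing the new build against what the device already has, block by block, and shipping only the blocks that changed. The block size and image contents below are invented for the example.

    /* Toy sketch of a differential update: ship only the blocks that differ
     * between the installed image and the new build. Illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <stddef.h>

    #define BLOCK_SIZE 4u            /* tiny block size, just for the demo */

    static size_t count_changed_blocks(const unsigned char *installed,
                                       const unsigned char *new_build,
                                       size_t image_len)
    {
        size_t changed = 0;
        for (size_t off = 0; off < image_len; off += BLOCK_SIZE) {
            size_t len = image_len - off;
            if (len > BLOCK_SIZE) {
                len = BLOCK_SIZE;
            }
            if (memcmp(installed + off, new_build + off, len) != 0) {
                ++changed;           /* only this block joins the delta package */
            }
        }
        return changed;
    }

    int main(void)
    {
        const unsigned char installed[] = "AAAABBBBCCCCDDDD";
        const unsigned char new_build[] = "AAAABxBBCCCCDDDD";  /* one block differs */
        size_t image_len = sizeof(installed) - 1;
        size_t total     = (image_len + BLOCK_SIZE - 1) / BLOCK_SIZE;

        printf("download %zu of %zu blocks\n",
               count_changed_blocks(installed, new_build, image_len), total);
        return 0;
    }

Run on the sample data, the sketch reports that only one of four blocks needs to be downloaded, which is the essence of why differential packages are smaller than full builds.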

With updates on PCs, for example, users can expect to see the download size for major Windows updates reduced by around 35 percent, Karagounis said. Users updating on mobile devices, meanwhile, will see more of the processing handled by the Windows Update service, which will help improve update speeds and device battery life.

The rollout of UUP will also streamline updates on Windows Mobile so users do not have to install more than one build at a time to get the latest version of the OS.

“On your phone, we would sometimes require you to install in two-hops (updates) to get current,” Karagounis noted. “With UUP, we now have logic in the client that can automatically fallback to what we call a ‘canonical’ build, allowing you to update your phone in one-hop, just like the PC.”

Developers Now Testing Paint 3D

With the latest Insider Preview build released yesterday, developers will be able to test such coming Windows features as Paint 3D, part of the Creators Update arriving next year. Build 14959 also adds new display scaling capabilities, fixes previous issues with automatic brightness settings and resolves a tap-to-pay problem on Windows Mobile.

More new features unveiled during a live-streamed Windows event last week will be rolled out to developers in additional builds over the coming weeks, according to Dona Sarkar, software engineer in the Windows and Devices Group.

“As I mentioned last week, Windows is an iceberg, the features that people ‘see’ are quite a small percent of the engineering work that we do to enable new UI to be visible,” Sarkar wrote yesterday in a blog post. “We’re excited to get more of the new Creators Update features in the hands of Insiders in the next couple of months.”

Starting next Monday, Nov. 7, Microsoft will also kick off its Bug Bash, with Windows engineers and Insiders beginning on the same day, Sarkar said. In the past, in-house engineers could get started on Microsoft-issued “quests” for glitches and problems a day ahead of Insiders. The Bug Bash is scheduled to run through Nov. 13.

Credits: Heise

Software Collections 2.3, now available in beta, includes PHP 7.0 and MySQL 5.7 in addition to a standalone Eclipse Neon. Developer Toolset 6.0 has GCC 6 on board for the first time.

Four months after the previous release, Red Hat has published updated versions of the Software Collections and the Developer Toolset. With version 4.6.1, Eclipse Neon moves out of the toolset for the first time and into Software Collections 2.3 as an independent collection. Other new additions are MySQL 5.7, Redis 3.2, and PHP 7.0, as well as Git 2.9 and the JVM monitoring tool Thermostat. The MongoDB database has also been updated to version 3.2. Several scripting languages are included in more recent versions as well, including PHP 5.6, Python 3.5, and Ruby 2.3.

Developer Toolset 6.0 includes GCC 6 for the first time, specifically version 6.2.1. In addition, there are numerous updates to the utilities, including binutils 2.27, elfutils 0.167, Valgrind 3.12, SystemTap 3.0, and Dyninst 9.2.0.

Additional information is available in Red Hat’s developer blog. Software Collections 2.3 and Developer Toolset 6.0 are currently in beta. Both packages are part of Red Hat Enterprise Linux (RHEL) subscriptions, which are now also available to developers free of charge through the Red Hat Enterprise Linux Developer Suite.

Credits: Toptechnews

Fighting computer viruses isn’t just for software anymore. Binghamton University researchers will use a grant from the National Science Foundation to study how hardware can help protect computers too.

“The impact will potentially be felt in all computing domains, from mobile to clouds,” said Dmitry Ponomarev, professor of computer science at Binghamton University, State University of New York. Ponomarev is the principal investigator of a project titled “Practical Hardware-Assisted Always-On Malware Detection.”

More than 317 million pieces of new malware–computer viruses, spyware, and other malicious programs–were created in 2014 alone, according to work done by Internet security teams at Symantec and Verizon. Malware is growing in complexity, with crimes such as digital extortion (a hacker steals files or locks a computer and demands a ransom for decryption keys) becoming large avenues of cyber attack.

“This project holds the promise of significantly impacting an area of critical national need to help secure systems against the expanding threats of malware,” said Ponomarev. “[It is] a new approach to improve the effectiveness of malware detection and to allow systems to be protected continuously without requiring the large resource investment needed by software monitors.”

Countering threats has traditionally been left solely to software programs, but Binghamton researchers want to modify a computer’s central processing unit (CPU) chip–essentially, the machine’s brain–by adding logic to check for anomalies while running a program like Microsoft Word. If an anomaly is spotted, the hardware will alert more robust software programs to check out the problem. The hardware won’t be right about suspicious activity 100 percent of the time, but since the hardware is acting as a lookout at a post that has never been monitored before, it will improve the overall effectiveness and efficiency of malware detection.

“The modified microprocessor will have the ability to detect malware as programs execute by analyzing the execution statistics over a window of execution,” said Ponomarev. “Since the hardware detector is not 100-percent accurate, the alarm will trigger the execution of a heavy-weight software detector to carefully inspect suspicious programs. The software detector will make the final decision. The hardware guides the operation of the software; without the hardware the software will be too slow to work on all programs all the time.”
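
A conceptual sketch of that two-stage arrangement is below, written as ordinary C purely for illustration; in the actual project the first stage lives in modified CPU logic, not in application code, and the feature names and thresholds here are invented.

    /* Illustrative two-stage detection pipeline: a fast, imprecise screen
     * (standing in for the hardware logic) escalates suspicious execution
     * windows to a slower, more thorough check (standing in for the
     * heavyweight software detector that makes the final decision). */
    #include <stdio.h>
    #include <stddef.h>
    #include <stdbool.h>

    typedef struct {
        double branch_miss_rate;     /* hypothetical per-window execution stats */
        double cache_miss_rate;
    } exec_window_t;

    /* Stage 1: cheap screen; allowed to raise false alarms. */
    static bool hardware_style_screen(const exec_window_t *w)
    {
        return (w->branch_miss_rate > 0.20) || (w->cache_miss_rate > 0.30);
    }

    /* Stage 2: heavyweight analysis; placeholder for deep behavioural or
     * signature-based inspection. */
    static bool software_detector(const exec_window_t *w)
    {
        return (w->branch_miss_rate > 0.20) && (w->cache_miss_rate > 0.30);
    }

    int main(void)
    {
        const exec_window_t windows[] = {
            { 0.05, 0.10 },          /* benign-looking: never escalated */
            { 0.25, 0.12 },          /* escalated, then cleared by stage 2 */
            { 0.28, 0.35 },          /* escalated and confirmed */
        };

        for (size_t i = 0; i < sizeof(windows) / sizeof(windows[0]); ++i) {
            if (hardware_style_screen(&windows[i]) && software_detector(&windows[i])) {
                printf("window %zu flagged as malicious\n", i);
            }
        }
        return 0;
    }

Only windows flagged by the cheap screen ever pay the cost of the expensive check, which mirrors the division of labor Ponomarev describes.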

The modified CPU will use low-complexity machine learning, the ability to learn without being explicitly programmed, to distinguish malware from normal programs; machine learning is the primary area of expertise of co-investigator Lei Yu.

“The detector is, essentially, like a canary in a coal mine to warn software programs when there is a problem,” said Ponomarev. “The hardware detector is fast, but is less flexible and comprehensive. The hardware detector’s role is to find suspicious behavior and better direct the efforts of the software.”

Much of the work–including exploration of the trade-offs of design complexity, detection accuracy, performance and power consumption–will be done in collaboration with former Binghamton professor Nael Abu-Ghazaleh, who moved on to the University of California-Riverside in 2014.

Lei Yu, associate professor of computer science at Binghamton University, is a co-principal investigator of the grant.

Grant funding will support graduate students who will work on the project in both Binghamton and California, as well as conference travel and the investigation itself. The three-year grant is for $275,

Credits: Timesunion

General Electric Co. has started using augmented reality devices as the company takes a major plunge into the use of artificial intelligence and virtual reality.

At the 2016 GE Minds + Machines conference held this week in San Francisco, Colin Parris, the vice president of GE Software Research, demonstrated how employees are talking to machines and interacting with them using Microsoft’s HoloLens augmented reality device.

GE has created so-called “digital twins” of the machines that it sells — a steam turbine for instance — that are digital replicas of actual machines at customer sites. The company has created a software system that allows customers to speak to the digital twin and ask it questions about potential parts breakdowns, financial forecasts and the best way to fix problems.

The digital twins are loaded with data they can crunch to provide the best advice, which is delivered in natural language, not unlike Siri on the iPhone.

 

“This is happening now,” Parris, who works in Niskayuna, said after he talked to a digital twin of a steam turbine at a customer site in Southern California. “What you saw was an example of the human mind working with the mind of a machine.”

The digital twin can run thousands of simulations at a time using environmental and operational data to predict breakdowns or other events.

And when a machine needs to be fixed, GE and its customers can use augmented reality to look inside those machines without having to actually touch them.

Parris put on a Microsoft HoloLens — an augmented reality headset — to superimpose the digital twin over a picture of the actual steam turbine. The HoloLens allowed him to open up the turbine and look at the parts — and see exactly which part may need replacing.

Parris said GE has been partnering with Microsoft on augmented reality technology. He said AR, as it is also called, can help GE executives redesign a factory floor by moving parts around in augmented reality.

It can also help with training and production, helping to teach workers how to assemble parts even before they ever step on a factory floor.

Credits: Gamasutra

The Gamasutra Job Board is the most diverse, active and established board of its kind for the video game industry!

Here is just one of the many, many positions being advertised right now.

Software Engineer, Wargaming

Location: Sydney, New South Wales, Australia

Wargaming Sydney is seeking an experienced Software Engineer to join our friendly team. We are looking for engineers who have good knowledge of low level systems programming and are looking to transfer to the exciting world of video games. Your primary responsibility will be to work on our PC engine.

What you will bring:

  • Several years proven commercial C/C++ experience
  • Understanding of object-oriented analysis and design
  • Excellent knowledge of C++
  • Great problem solving skills
  • Strong debugging skills
  • Strong performance analysis and optimisation skills
  • Ability to work with existing development processes and codebase
  • Ability to work and collaborate in a team

It would be great if you also have:

  • Bachelor’s degree or equivalent in Computer Science or a related field
  • Great understanding of algorithms and techniques used in 3D games
  • Experience with other platforms (PS4, Xbox One, OS X, iOS, Linux)
  • Experience in OpenGL or DirectX
  • Experience with Qt and tool development
  • Knowledge of content creation pipelines

If you are passionate about the games industry and enjoy solving technical and design challenges creatively, please apply using the link on this page or forward your resume by email to jobs_sydney@wargaming.net to embark on your career with Wargaming!

Only successful applicants will be contacted.

About Wargaming.net:

Wargaming Sydney is the Australian branch of Wargaming.net.

The Sydney office works on BigWorld Technology, a cutting-edge online game engine used by Wargaming studios around the world to power games such as World of Tanks, which has over 100 million players.

BigWorld Technology is the product of a creative, dynamic and innovative team working in an environment that is challenging, exciting and constantly evolving.

We pride ourselves on being a professional, friendly team with flexible work hours and no crunch time! On offer are a games room complete with VR, PlayStation and Xbox consoles, an arcade-style video game machine, and table tennis, just to name a few.

Interested? Apply now.

About the Gamasutra Job Board

Whether you’re just starting out, looking for something new, or just seeing what’s out there, the Gamasutra Job Board is the place where game developers move ahead in their careers.

Gamasutra’s Job Board is the most diverse, most active, and most established board of its kind in the video game industry, serving companies of all sizes, from indie to triple-A.

Credits: Cio-today

Tech giant Cisco is bulking up its enterprise security offerings with a new endpoint security tool. The company launched Cisco AMP for Endpoints as part of its annual Cisco Partner Summit taking place in San Francisco this week.

The new tool aims to combine prevention, detection, and response into a single platform that takes a more aggressive approach to security than a prevention-only strategy.

“By leveraging the scale and power of the cloud and Cisco’s threat-centric security architecture, AMP for Endpoints allows customers to see and stop more threats, faster,” the company said in a statement.

A New Approach to Endpoint Security

The company was critical of other tools that adopt a prevention-only strategy, arguing that such a relatively passive attitude toward security is inappropriate given the modern landscape of cyber threats. This is partly due to an overreliance on legacy tools that may have been patched with upgrades over time but are still not suited to protecting modern network infrastructure, while adding to the complexity of security solutions.

“With the fact that it takes enterprises, on average, over 100 days to detect a threat in their own environment, it is clear that organizations need a new approach to endpoint security,” the company said.

AMP for Endpoints will provide enterprises with a simpler and more effective approach to endpoint security by combining prevention, detection, and response in one SaaS-deployed, cloud-managed solution, according to Cisco. The new tool reduces complexity by combining multiple capabilities into a single platform, the company said.

More Effective Responses

To boost the prevention capabilities of AMP for Endpoints, Cisco is giving the tool access to global threat intelligence from Talos, its in-house cybersecurity intelligence organization. It will also include built-in sandboxing technology to quarantine and analyze unknown files, the company said.

AMP will also offer greater visibility and faster detection through continuous monitoring and shared analytics to detect stealth attacks, according to Cisco. AMP for Endpoints will record all file activity to monitor and detect malicious behavior, which it can then use to alert security teams. The platform shares and correlates threat information in real time, which should help reduce time to detection to minutes, the company said.

In addition, Cisco said AMP will offer enterprises a more effective response, thanks to the platform’s deep visibility and a detailed recorded history of the behavior of malware over time, including details such as where it came from, where it has been, and what it has been doing.

AMP for Endpoints accelerates investigations and reduces complexity through a cloud-based user interface that searches across all enterprise endpoints for indicators of compromise, Cisco said. Users can then respond systematically to attacks across PCs, Macs, Linux, and mobile devices, removing malware with a few clicks.

Credits: Infoworld

Eclipse Che 5.0 is making accommodations for Docker containers and Language Server Protocol across multiple IDEs. The newest version of the Eclipse Foundation’s cloud-based IDE and workspace server will be available by the end of the year.

The update offers Docker Compose Workspaces, in which a workspace can run multiple developer machines with support for Docker Compose files and standard Dockerfiles. In the popular Docker software container platform, a Compose file is a YAML (originally “Yet Another Markup Language”) file defining services, networks, and volumes; a Dockerfile is a text document with commands to assemble an image. Che also has been certified for Docker Store, which features enterprise-ready containers. In addition, Docker is joining the Eclipse Foundation and will work directly with Che.

OpenShift, Red Hat’s cloud application platform, gets a thumbs-up in Che 5.0. “Che will support running on OpenShift, including distributing workspace runtimes to operate as OpenShift pods. This will complement our existing OpenShift plugin for deploying your projects to OpenShift,” said Tyler Jewell, the Eclipse Che project lead.

Developers who adopt the 5.0 upgrade can live-sync workspaces and projects to desktops so that they can be used with local IDEs. To improve deployment, Che can take a production image and mount source code inside while adding an artifact repository and injecting agents for SSH, terminal, or Intellisense. This helps eliminate surprise production deployment problems, said Jewell. The stack editor in the upgrade, meanwhile, creates custom runtimes for Che workspaces based on a user’s software and environment, while controlling required resources and agents.

Credits: Dzone

In 1954, work began at IBM on Fortran, the high-level general-purpose programming language. At that time, there were only a few options to choose from in software engineering. Nowadays, we have a great many options in our hands, and that number grows every day, as does the number of decisions.

The Java platform is a good example of the assertion above: we have to evaluate the options available to us, both non-commercial and commercial. One underlying choice involves the IDE. It should be an easy decision; after all, the IDE (in this case) only needs to support one language across a few versions. In reality, though, we have several options, such as Eclipse, IntelliJ IDEA, NetBeans, and Rational Application Developer. The decisions don’t stop there; we also have to choose among other options such as:

  • Application server (Tomcat, WildFly, WebLogic, GlassFish).
  • Web framework (Spring, Java EE, Play, Grails).
  • Persistence libraries (Hibernate, EclipseLink, jOOQ, Spring JDBC).
  • Presentation libraries (JSF, JSP, Wicket, or some library beyond the Java platform).
  • Build and dependency management tools (Maven, Gradle, Ivy).
  • Continuous integration tools (Jenkins/Hudson, Bamboo, Travis CI).

Wait! These decisions are easy. We could choose the following options: Eclipse, WildFly, Spring, Hibernate, JSP, Maven, and Jenkins. No further decisions are required, right? No! This is just one level of decisions. Other levels, for me, are the following:

  • The plugins or subprojects of the options above, or any other libraries, provided by the open-source community or under commercial licenses, that we use to facilitate our work.
  • The specific projects developed inside the organization.
  • The higher level of technology and language choices, such as cloud computing platforms or the Ruby language.

Considering these levels of decision-making, we can imagine a scenario where we need to choose a library to deal with dates and times and have the following options in our hands: the JDK’s java.util.* API, the Joda-Time API, or an xtime API developed by a team inside the organization.

Until now, I have used the Java platform as an example. There are other general-purpose platforms, like Ruby or .NET, as well as more specialized languages like Scala, R, Go, or Perl. To make matters worse, we can have two or more platforms involved in our projects, and thereby even more options from which to choose.

So, how do we deal with this complex universe of options? There is no simple answer, but I suggest the following practices for dealing with it:

  1. Make decisions together with your team, or even with several teams. That way, decisions are shared and debated among professionals with different experiences, leading to a better outcome.
  2. Treat the most popular options (platforms, technologies, tools, or libraries) as the first candidates in decision-making. Because these options can be shared with the whole organization or a group of teams, popular choices promote the alignment of knowledge among professionals and reduce technical impediments.
  3. Test new options, and adopt them only once the knowledge has been mastered and at least minimally disseminated. Before introducing a new option, evaluate it, and share the results of the evaluation with other professionals to collect opinions and thereby make a better decision.
  4. Don’t create a new option, such as a library or framework, in the community or in your organization unless you have strong justifications and real differentiators. Prefer contributing to an existing project over creating a new one. Only invest your time in a new option if the community or your organization doesn’t support what you need.
  5. Don’t try to embrace the whole universe. The universe of options grows continuously, so it is not feasible to use, or even know about, everything at the same time. Instead, choose one option per technical requirement and change it only after a maturation period.