Credits : Coindesk

 

Major ethereum clients, including Go-Ethereum (Geth) and Parity, have released software updates following an earlier decision to delay the planned system-wide upgrade dubbed Constantinople.

The upgrade was postponed Tuesday during a developers call, a move that came after blockchain audit firm ChainSecurity discovered a security vulnerability in Ethereum Improvement Proposal (EIP) 1283, one of the planned changes included in Constantinople. If exploited, the bug would have opened the door to “reentrancy attacks,” letting malicious actors withdraw funds from the same source multiple times.

A new activation block for the upgrade will be decided during another call later this week.

In order to prevent the fork from happening – given that some of the software clients on the network had already been updated ahead of the fork – developers of the major ethereum implementations moved to publish new versions.

Geth released an emergency hotfix (version 1.8.21) designed to delay the upgrade, though developer Péter Szilágyi noted that users who do not wish to upgrade to the new version of the client can also downgrade their existing clients to version 1.8.19 or continue running the current version (1.8.20) with an override.

Parity clients can similarly either upgrade their existing clients to 2.2.7 (the stable release) or 2.3.0 (a beta release) or otherwise downgrade to 2.2.4 (beta).

Parity Technologies head of security Kirill Pimenov, speaking in an ethereum core developers chat on Gitter, said he recommended users upgrade to the new release, rather than downgrade to an older version, explaining:

“I want to restate — downgrading Parity to pre-Constantinople versions is a bad idea, we don’t recommend that to anyone. Theoretically it should even work, but we don’t want to deal with that mess.”

Similarly, Parity release manager Afri Schoedon told CoinDesk that he recommends 2.2.7, though the other two should work as well.

In a blog post, core developer Hudson Jameson wrote that anyone who does not run a node or otherwise participate in the network does not need to do anything.

Smart contract owners do not need to do anything either, though “you may choose to examine the analysis of the potential vulnerability and check your contracts,” he wrote.

However, he pointed out that the change that could introduce the potential issue will not be enabled.

As of the blog post’s publication, security researchers with ChainSecurity, the firm that initially discovered the bug, and Trail of Bits are analyzing the entire blockchain for affected contracts.

Reentrancy attacks

So far, no instances of the vulnerability have been discovered in live contracts. However, Jameson noted that “there is still a non-zero risk that some contracts could be affected.”

To protect transfers on ethereum from reentrancy attacks, only a small, fixed allowance of gas – the unit used to pay for computation – is forwarded along with a transfer, which has historically been too little for attackers to repurpose the transfer to steal funds.

However, as explained to CoinDesk by Hubert Ritzdorf – CTO of ChainSecurity and the researcher who found the vulnerability – a “side effect” of EIP 1283 means attackers can leverage this small amount of gas for malicious purposes.

“The difference is before you couldn’t do something malicious with this little bit of gas, you could do something useful but not something malicious and now because some of the operations became cheaper, now you can do something malicious with this little bit of gas,” said Ritzdorf.
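
To make the pattern concrete, here is a toy Python model of a reentrancy attack. It is a simplified illustration only (not Solidity, and not the EIP 1283 gas accounting itself): the vault pays the recipient before updating its own bookkeeping, so a malicious recipient can call back into withdraw() and drain the same balance several times.

# Toy model of reentrancy: the external call happens before the balance is
# zeroed, so a malicious recipient can re-enter withdraw() and be paid again.
class VulnerableVault:
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount > 0:
            who.receive(self, amount)   # external call first...
            self.balances[who] = 0      # ...bookkeeping updated only afterwards

class Attacker:
    def __init__(self):
        self.stolen = 0
        self.reentries = 0

    def receive(self, vault, amount):
        self.stolen += amount
        if self.reentries < 3:          # re-enter while the balance is still non-zero
            self.reentries += 1
            vault.withdraw(self)

vault, attacker = VulnerableVault(), Attacker()
vault.deposit(attacker, 100)
vault.withdraw(attacker)
print(attacker.stolen)                  # 400: the same 100 deposit was paid out four times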

And though the issue of reentrancy is always on the minds of smart contract developers coding in Solidity on ethereum, Matthias Egli – COO of ChainSecurity – explained that core developers strictly looking at the mechanics of the virtual machine couldn’t have easily spotted this vulnerability.

He told CoinDesk:

“It’s a Solidity thing, it’s not an [ethereum virtual machine] core thing that in practice allowed this attack. That was part of this disconnect that in practice small changes to gas cost will allow new kind of attacks which wasn’t considered before.”

What’s more, Ritzdorf added that the fix to this issue isn’t as easy as updating ethereum’s gas cost limits, explaining that “if we change this amount to a small number now then we would fix the vulnerability but we would also break many existing [smart] contracts.”

As such, for the time being, a delay to Constantinople was the right call by core developers according to Egli.

“It was the right decision because it at least buys some time for researchers to evaluate the real world impact. With high likelihood, this [EIP] will be taken back and not included in the upcoming hard fork which is now delayed by perhaps a month,” he contended.

Next steps

As of press time, developers are contacting exchanges, wallets, mining pools and other groups which use or interact with the ethereum network.

Core developers plan to discuss longer-term steps – including when to execute Constantinople and how to fix the bug in EIP 1283 – during another call on Jan. 18.

Multiple developers suggested initiating some sort of bug bounty program focused on analyzing the code, in order to ensure future bugs are discovered well in advance, rather than “right before [hard fork] day.”

Szilágyi noted that the EIP had been available for review for nearly a year, adding that “maybe it’s not a bad idea to do some grants for more focused eyes.”

 


Credits : Forbes

 

Cast Software is known for its technology platform that provides metrics, quality and software intelligence ratings to determine the validity, strength and functionality of any particular application.

The firm held its annual software intelligence forum in Paris (where it is headquartered) last week to attempt to explain why we all need to build software with a little more care and attention, to make sure it is robust, safe and secure and, above all, fit for purpose.

“Organizations must now migrate to the cloud and [as part of that process] work to re-think software systems for greater agility, improved security and data integrity,” said Vincent Delaroche, chairman and CEO of Cast. “All of this is driving the adoption of Cast ‘Software Intelligence’ as business and IT leaders look for more accuracy, visibility and control over compliance, security and modernization risk.”

MRI scan for software

CEO Delaroche has likened his firm’s branded ‘Software Intelligence’ approach to providing an MRI scanner for software applications: it performs triage and identifies where the most severe problems exist, so those can be addressed first.

Just as an MRI scanner builds pictures of our own human anatomy in order to study the physiological processes of the body, Cast uses its approach to software intelligence to study the way software ‘compiles & executes’, makes ‘calls’ to various data resources and the way it connects to other networks, Application Programming Interfaces (APIs), cloud computing channels… and, ultimately, to the devices we all use.

Essentially, in terms of its technology proposition, Cast wants software to compile and run according to acknowledged global software engineering standards. Logically then, in terms of its commercial proposition, Cast wants customers to pay it to analyze their software in order to provide them with a bill of operational health, so to speak.

“For real-world architecture there are blueprints and building regulations. For software there is no equivalent. Like building regulations, we need standards for code,” said Lev Lesokhin, executive vice president for strategy and analytics at Cast Software.

Lesokhin claims that ‘most people’ (by which he means anybody except the software development/programmer team) know nothing about the software that runs their business — and that they don’t typically want to know. He further notes that while the software industry is full of methodologies, de facto models and a multiplicity of standards, none of these guidelines necessarily exists to measure the ultimate quality of the software being produced, developed and deployed.

Speakers at Cast’s Paris event included Pierantonio Azzalini in his role as CTO of Italian shipbuilding company Fincantieri. Azzalini explained that the company operates 20 shipyards around the world, currently has an order backlog of €33 billion and is already taking orders for 2027, so productivity is a major concern.

“As a CTO, the only metrics I had was Lines of Code (LOC) and man [he means person] days. But I was using this 15 years ago. If you don’t maintain software, you can go out of business in 10 days. Software Intelligence is not something theoretical, it is a ‘must’ for today’s IT. The most difficult thing is the relationship between an engineering business and IT. If you have a room full of people who are not familiar with IT, it is useful to have a standard to present how software metrics/quality is improving,” said Azzalini.

Software Composition Analysis

Cast Software had a busy 2018 by all accounts. The company acquired Antelink, a Software Composition Analysis (SCA) company founded by Inria, the French public science and technology institution dedicated to computer science research.

Antelink’s technology will be integrated into Cast Highlight, a cloud SaaS-based application portfolio analysis product designed to calculate and assign a unique SHA-1 signature – a cryptographic hash function originally designed by the National Security Agency – to each component of complex software, including open source frameworks. These ‘fingerprints’ can be compared to reference databases of software components.
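
As a rough illustration of the fingerprinting idea, the Python sketch below hashes every file under a project directory and looks each digest up in a reference table of known components. The table contents and paths are placeholders, not Cast’s or Software Heritage’s actual data.

import hashlib
from pathlib import Path

# Placeholder reference database: SHA-1 digest -> known component name.
KNOWN_COMPONENTS = {
    "da39a3ee5e6b4b0d3255bfef95601890afd80709": "empty file (example entry)",
}

def sha1_of_file(path: Path) -> str:
    h = hashlib.sha1()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def identify_components(root: Path) -> None:
    # Hash each file and report any match against the reference database.
    for path in root.rglob("*"):
        if path.is_file():
            match = KNOWN_COMPONENTS.get(sha1_of_file(path))
            if match:
                print(f"{path}: matches known component {match}")

# identify_components(Path("./my-project"))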

According to Cast, “The Software Heritage archive contains information about known application security vulnerabilities in addition to copyrights for all known software in use, including open source components. This type of knowledge is essential in scenarios where a Bill of Materials is required, such as outsourcing software development, buying software assets or during a merger or acquisition. SCA capabilities are becoming increasingly important for digital transformation success and improving the application security of business-critical systems.”

What to think next

Cast points to the increasing use of open source software and claims that its technology (code analysis and, of course, software intelligence) is well suited to analyzing the burgeoning amount of code growing in the open arena.

CEO Delaroche says that his firm spent 11 million Euros on research and development in 2018, but that this expenditure was not simply focused on putting code analytics functions into its software. Instead, although code analytics is fundamental to what Cast does, it was focused on the wider development of (and simplification of) the total Cast platform.

Using his native French, Delaroche says that too many people want to, “Cacher la merde sous le tapis.”

So, if we follow his colorful use of language, now is not the time to hide any of your nasties under the carpet… now is the time to get it all out in the living room and work out what needs to go in the trash.


 

Credits : Laravel-news

 

The idea of working entirely from an iPad has always appealed to me. The portability, the battery life, and of course the touch screen make it an excellent device.

The downside is that everything is sandboxed. You can only run programs from the App Store, which makes doing crazy things like installing a development server on the machine unattainable, but there are other ways to work around the limitation with existing apps and a little ingenuity.

Last month I had an unfortunate biking accident and broke the bones in the back of my left hand. After surgery, they had my middle, ring, and pinkie fingers fused so everything could heal properly, but this meant I couldn’t use that hand for typing. I had to embrace typing one-handed, and I found using an iPad to be the easiest because of the autocomplete and the autocorrect. I was much faster with it than on a MacBook, and that pushed me to want to use it more.

With my job, I was able to do all my work except development with the iPad, and I would switch back and forth: a traditional computer with my code editor, then the iPad for everything else. This flow quickly became annoying, and I started exploring ways of doing everything on the iPad.

Screen Sharing

My first idea was to just screen share back to my Mac. Using a screen share would allow me to use all the tools I’m comfortable with and still use the iPad. That idea worked better in theory than in practice. I couldn’t get the screen resolutions to match up, so everything was tiny on the screen and had two inches of letterboxing.

With this not being suitable I went back to looking for other ways of solving this and found that some people reported good results using a development server and then a text editor like Vim.

Laravel Forge

Once I decided to go with a dev server I had to figure out how I wanted to set it up. Not being a server admin, and honestly not being extremely comfortable on servers, I decided to follow the path of least resistance. As a Laravel Forge customer, that meant logging into my account and spinning up a new box. Then I needed to add all the sites. I used a generic domain name and then set up each site as a subdomain.

Setting up each dev site through Forge works, but if you work on many projects, it’s probably not ideal. You would be better off using something fancy like Nginx wildcard routing.

With the server up and running it was time to figure out how to get into it from the iPad.

SSH’ing

The App Store had a few different SSH apps, and all looked like they would be sufficient, but I didn’t have time to test them all. Based on the reviews and the app screenshots I decided to try Termius first and so far it has met my needs.

From within Termius, you can create an SSH key to add to the server for passwordless login; it supports SFTP for moving files up to the server, and has a lot more features that I’ve yet to need.

Honestly, using Forge and Termius has been great. It’s simple to set up and easy to get started with, especially if you’ve been running Digital Ocean or Linode servers in the past.

Since the server runs on the web, I set up two aliases to make it easy to take it online or off.

alias do_allow80="sudo ufw allow 80"    # open HTTP so the dev sites are reachable

alias do_block80="sudo ufw deny 80"     # deny HTTP to take the sites offline


These two commands turn port 80 on or off. Of course, you might want to get fancier and block all traffic except your IP, and include port 443 for SSL.

Once the server is set up and you can SSH in, it’s time for the fun part: learning Vim.

VIM

I didn’t want to use Vim, but every iOS editor app was severely lacking in features that I use. I could either adapt my workflow to them or use a tool that I can customize to my needs and match my existing setup.

I’ve never been a big Vim user. I knew the basics: how to use the hjkl keys to move around, how to save, how to exit, but that’s about it. So I’ve spent the majority of my time learning the ropes, and I’m still not very fast using it. Vim is a learning process.

What is excellent about the editor, and something I’ve never appreciated until now, is how many useful plugins the ecosystem has. I was able to duplicate most of the things I use in Sublime or Code with only a few plugins. I can use ctrl+p to fuzzy find files, ctrl+e to open recent files, ctrl+l to open a sidebar of the directory structure, and even go to definition with the help of Ag.

To set up the editor I first watched the Laracasts Vim series and then found a plugin named FZF that has been an enormous help for navigation. With it installed I can do things like:

  • control+p to fuzzy open files :GFiles
  • control+t to find methods within a file :BTags
  • control+e to open recent files :History

Plus many other things

Check out this post by Jesse Leite for more FZF tips.

I have my full .vimrc available on GitHub if you’d like to see my exact setup.

Once I get more comfortable with the basic tasks I have a feeling Vim could be something I could learn to love.

Database

For database access, I first found a tool called MyCLI for the terminal, and it’s fantastic, with autocomplete and syntax highlighting. It works great, but I still prefer a GUI, and the only app I could get to work was Navicat on iOS. The other apps I tried didn’t support connecting over SSH with a key.

I find both of these options lacking compared to Sequel Pro, but they are usable enough.

Frontend Coding

At this time, this is the most significant limitation. I’ve not found a workaround, but I’ve also not had to do much JavaScript or CSS work. I’ve had some people recommend services like Browsershots to get a full browser, or you could use screen sharing to any Mac. I feel like this will be a breaking point for many people since so much of web development is now on the frontend.

Keyboard

You can work using nothing but the on-screen keyboard or with Apple’s official Smart Keyboard, but neither comes with an escape key. For Vim I remapped around this, but not having it was annoying, so I found a tiny $40 mechanical keyboard that works great. It includes the escape key and is about the same width as the 11″ iPad. I wrote up a full review of the keyboard on my site.

Conclusion

I know everybody works on different things and has different preferences, so this setup isn’t for everyone. Before this iPad, I was working on a 12″ MacBook, so I’m used to small screens and I enjoy running apps at full screen. Making the switch was pretty easy for me. The only things I’m missing are a console and a web inspector.


Credits : Techradar

 

If you don’t use a software updater, you might be missing out on important patches to some of the programs you use every day. Many programs automatically update themselves so you can be sure that you’re always using the most recent (and most secure) version, but this isn’t the case for all software, and some updaters work in different ways.

For example, you may not be offered updates for programs you don’t use very often, and it can be difficult to remember to launch programs just to see if there’s anything new to download.

This is where a dedicated software updater can help you out. These handy utilities will scan your computer to determine what you have installed, and will then go online to see if there are new versions of any of your applications available. Some utilities will automatically install the updates for you, while others will simply let you know that there’s an update available. Either way, you’ll be able to ensure you’re running the very latest versions of all your favorite programs with very little effort.
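
Conceptually, that check boils down to something like the Python sketch below: take an inventory of installed programs and compare each version against the latest known version in a catalogue. The program names, versions and catalogue here are placeholders rather than any particular updater’s data.

def parse_version(version: str) -> tuple:
    # "3.0.6" -> (3, 0, 6) so versions compare numerically rather than as text.
    return tuple(int(part) for part in version.split("."))

LATEST = {"7-Zip": "19.0.0", "VLC": "3.0.6"}        # stand-in online catalogue
INSTALLED = {"7-Zip": "18.5.0", "VLC": "3.0.6"}     # what a disk scan might find

for name, installed in INSTALLED.items():
    latest = LATEST.get(name)
    if latest and parse_version(latest) > parse_version(installed):
        print(f"{name}: update available ({installed} -> {latest})")
    else:
        print(f"{name}: up to date")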

1. Patch My PC Home Updater

The quickest, easiest way to update your software

Portable app
Automatic scans
One-click updates
Not the biggest database

Patch My PC has been around for a while now, and it has gained a large following – something you’ll understand once you try it out. This is a portable app, making it ideal for sticking on a USB drive and keeping friends’ and family members’ computers updated, and it’s delightfully simple to use.

As soon as you launch the program, it will automatically scan your computer, determine which software you have installed, and quickly let you know which needs to be updated. The database of programs it supports is not completely exhaustive, but it’s pretty comprehensive.

If there are any out-of-date programs detected on your computer, you can start to update all of them with a single click – there’s no need to manually start each updater, as each of the updates will be downloaded for you in turn. Many programs will update ‘silently’ without the need for any intervention, but for some you will be prompted to allow the update to continue. As an added bonus, you can configure update checks on a schedule so you don’t need to remember to run them manually. Great stuff!

2. Downloadcrew UpdateScanner

Automatically check for updates to hundreds of applications

Automatic and manual scans
Huge software database
Not the most intuitive

Drawing on its sizeable and growing database of software, the Downloadcrew UpdateScanner is able to check for updates across a huge number of titles. The program can be configured to start automatically with Windows and check for updates every time you start your computer, or you can schedule a scan for a particular time of day. You can, of course, opt for a manual scan if you prefer.

While the program is undeniably powerful and very thorough when it comes to checking for updates, the way it works is not as smooth and intuitive as some of its rivals. The updater sits in your system tray and a pop-up lets you know when something is available to download. Click the notification and the main program interface will appear, complete with links to endless programs you may want to install.

Hiding at the top of the screen is a link to download the available updates, and clicking this takes you to the Downloadcrew website where you can download the newest versions of software manually.

3. SUMo

Can check for beta versions
Can exclude certain programs
Not the fastest
Automatic updates aren’t free

Despite the name, SUMo has nothing to do with wrestling – not that we really imagined that you thought that! The name is short for Software Update Monitor, and it does very much what you would expect it to. There’s a slight problem, though: it does it a little slowly.

As you would hope, the program scans your hard drive for software so it knows what you have installed, and this process can be a little on the slow side.

SUMo will then let you know of any programs which need updating and you can manually select those you want to update and download the latest version from the SUMo website.

If you want the advantage of automatic updating, you’ll have to shell out for the Pro version of the tool. There are some nice touches, such as being able to check for beta versions of software, and the option to ignore (i.e. never check for) updates for certain programs. There’s also a secondary tool available, DUMo, that can be used to check for driver updates. A perfect companion.

4. OUTDATEfighter

Not the most comprehensive, but includes some handy extras

Includes software uninstaller
Finds fewer updates than rivals
Contains ads

In tests, OUTDATEfighter seems to be rather more limited than the competition. The utility found fewer updates than alternative update tools did, and this raises the question of whether it is going to miss something important when it really matters.

In addition to this potential problem, the program interface serves as an advertising billboard for other products by the same company. You’ll find toolbar buttons that link to information about utilities to speed up and protect your computer in a variety of ways.

OUTDATEfighter can also be used to uninstall software you no longer need, as well as managing Windows Updates – it’s not really clear, however, why you would want to go down this route rather than simply using Windows’ own tools.

Ultimately, your mileage may vary with OUTDATEfighter. You may be in luck and find that all of your installed software is supported and detected. It’s worth testing it to find out.


5. Glarysoft Software Update

Great detection rates, but updates aren’t automatic

Well designed interface
Extensive database
No automatic updates
Extra bundled software

Glarysoft has a glorious history of releasing outrageously useful utilities for Windows, so the hope is very much that Glarysoft Software Update makes the grade.


The good news is that it does. This is a quality tool with a great, professional feel and a high update detection rate. For system administrators and homes with multiple computers, there is a remote update option that lets you administer other computers from afar. A lovely idea.

Sadly, as with many other update tools, the update process is a manual one – unless you are willing to pay for an upgrade to the Professional version, in which case it can be automated. A nice touch here is that you are given trial access to Software Update Professional so you can get an idea of how it works and whether it is worth your money.

A word of warning: take care during the installation of the program that you do not unwittingly install the extra Malware Hunter tool that’s offered to you. You don’t need it.


     


Credits : Zdnet

 

For at least three years, hackers have abused a zero-day in one of the most popular jQuery plugins to plant web shells and take over vulnerable web servers, ZDNet has learned.

The vulnerability impacts the jQuery File Upload plugin authored by prodigious German developer Sebastian Tschan, most commonly known as Blueimp.

The plugin is the second most starred jQuery project on GitHub, after the jQuery framework itself. It is immensely popular, has been forked over 7,800 times, and has been integrated into hundreds, if not thousands, of other projects, such as CMSs, CRMs, Intranet solutions, WordPress plugins, Drupal add-ons, Joomla components, and so on.

A vulnerability in this plugin would be devastating, as it could open gaping security holes in a lot of platforms installed in a lot of sensitive places.

This worst-case scenario is exactly what happened. Earlier this year, Larry Cashdollar, a security researcher for Akamai’s SIRT (Security Intelligence Response Team), discovered a vulnerability in the plugin’s source code that handles file uploads to PHP servers.

Cashdollar says that attackers can abuse this vulnerability to upload malicious files on servers, such as backdoors and web shells.

The Akamai researcher says the vulnerability has been exploited in the wild. “I’ve seen stuff as far back as 2016,” the researcher told ZDNet in an interview.

The vulnerability was one of the worst-kept secrets of the hacker scene and appears to have been actively exploited, even before 2016.

Cashdollar found several YouTube videos containing tutorials on how one could exploit the jQuery File Upload plugin vulnerability to take over servers. One of three YouTube videos Cashdollar shared with ZDNet is dated August 2015.

It is pretty clear from the videos that the vulnerability was widely known to hackers, even if it remained a mystery for the infosec community.

But steps are now being taken to address it. The vulnerability received the CVE-2018-9206 identifier earlier this month, a good starting point to get more people paying attention.

All jQuery File Upload versions before 9.22.1 are vulnerable. Since the vulnerability affected only the code that handles file uploads for PHP apps, other server-side implementations should be considered safe.

Cashdollar reported the zero-day at the start of the month to Blueimp, who promptly looked into the report.

The developer’s investigation identified the true source of the vulnerability not in the plugin’s code, but in a change made in the Apache Web Server project dating back to 2010, which indirectly affected the plugin’s expected behavior on Apache servers.

The actual issue dates back to November 23, 2010, just five days before Blueimp launched the first version of his plugin. On that day, the Apache Foundation released version 2.3.9 of the Apache HTTPD server.

This version wasn’t anything out of the ordinary but it included one major change, at least in terms of security. Starting with this version, the Apache HTTPD server got an option that would allow server owners to ignore custom security settings made to individual folders via .htaccess files. This setting was made for security reasons, was enabled by default, and remained so for all subsequent Apache HTTPD server releases.

Blueimp’s jQuery File Upload plugin was coded to rely on a custom .htaccess file to impose security restrictions to its upload folder, without knowing that five days before, the Apache HTTPD team made a breaking change that undermined the plugin’s basic design.
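
For illustration, here is a minimal Python sketch of the kind of server-side check that mitigates this class of issue: validating uploads in application code rather than relying on web-server configuration such as .htaccess. The allow-list is illustrative, and extension checking alone is not a complete defence.

import os

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".pdf"}  # illustrative allow-list

def is_allowed_upload(filename: str) -> bool:
    # Strip any directory components, then allow only whitelisted extensions.
    name = os.path.basename(filename)
    _, ext = os.path.splitext(name.lower())
    return ".." not in name and ext in ALLOWED_EXTENSIONS

print(is_allowed_upload("holiday-photo.png"))  # True
print(is_allowed_upload("shell.php"))          # False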

“The internet relies on many security controls every day in order to keep our systems, data, and transactions safe and secure,” Cashdollar said in a report published today. “If one of these controls suddenly doesn’t exist it may put security at risk unknowingly to the users and software developers relying on them.”

Since notifying Blueimp about his discovery, Cashdollar has been spending his time investigating the reach of this vulnerability. The first thing he did was to look at all the GitHub forks that have sprouted from the original plugin.

“I did test 1000 out of the 7800 of the plugin’s forks from GitHub, and they all were exploitable,” Cashdollar told ZDNet. The code he’s been using for these tests is available on GitHub, along with a proof-of-concept for the actual flaw.

At this article’s publication, of all the projects derived from the original jQuery File Upload plugin that the researcher had tested, only 36 were not vulnerable.

But there is still lots of work ahead, as many projects remain untested. The researcher has already notified US-CERT of this vulnerability and its possible impact. A next step, Cashdollar told ZDNet, is to reach out to GitHub for help in notifying all plugin fork project owners.

But looking into GitHub forks is only the first step. There are countless web applications where the plugin has been integrated. One example is Tajer, a WordPress plugin that Cashdollar identified as vulnerable. The plugin had very few downloads, and as of today, it has been taken off the official WordPress Plugins repository and is not available for download anymore.

Identifying all affected projects and stomping out this vulnerability will take years. As it’s been proven many times in the past, vulnerabilities tend to linger for a long time, especially vulnerabilities in plugins that have been deeply ingrained in more complex projects, such as CRMs, CMSs, blogging platforms, or enterprise solutions.

 


 

Credits : Ciol

 

This article presents a hypothesis on what the (not too far in the future) world of AI-assisted software development will look like. In a line, it reads something like this: concepts governing software creation will stay the same, but the pipeline is going to look incredibly different. At almost every stage, AI will assist humans and make the process more efficient, effective and enjoyable.

Our hypothesis is supported by predictions that the AI industry’s revenue will reach $1.2 trillion by the end of this year, up 70% from a year ago. Further, AI-derived business value is expected to reach $3.9 trillion by 2022. We have also factored in observations of three main themes over the last decade: compute power, data and sophisticated developer tools.

More Compute Power: Easy access to elastic compute power and public clouds has empowered developers, enterprises and tool creators to quickly run heavier analysis workloads through parallelization. According to IDC, cloud-based infrastructure spending will reach 60% of all IT infrastructure by 2020.

More Data: Improved processing power will see digital leaders investing in better collection and utilization of data – 90% of the world’s data was created last year, but utilization is at 1%. It’s slated to grow to 3% or 4% by 2020.

Integration and Distribution of Systems: The integration of disconnected systems using APIs, coupled with the microservices pattern, enables the distribution of previously monolithic systems. This leads to a powerful mix in which the tools and processes required for software development are composed of multiple systems running in different places.

The software creation process consists of three phases, which can be further split into nine different task categories. Interestingly, some of these categories have seen far more investment in AI-powered tooling than others. In the course of this article, let’s discuss some of the instances where AI will assist technologists in software development by taking over data analysis and prediction capabilities. Such an evolution will give technologists more time to focus on judgement- and creativity-related tasks that machines can’t take on.

There is an increasing presence of what we call Intelligent Development Tools. We believe this is because the three themes above, together with the growing clout of developers, have led dozens of startups to offer developer-focused services such as automated refactoring, testing and code generation. The evolution of these tools can be compartmentalized into three levels of sophistication.

The Levels of Sophistication

The first focused on the automation of manual tasks, which increased the reliability and efficiency of software creation. For example, test automation reduced cycle time through parallelization, which shortened feedback loops. Deployment automation improved reliability using repeatable scripts. However, it was still humans who analyzed and acted on the feedback.
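
As a minimal Python sketch of that first level, the snippet below runs a few independent checks in parallel to cut cycle time; the checks themselves are placeholders that simply sleep and return a result.

from concurrent.futures import ThreadPoolExecutor
import time

def make_check(name: str, passes: bool):
    def check():
        time.sleep(0.5)          # stand-in for real test work
        return name, passes
    return check

checks = [make_check("login", True), make_check("checkout", True), make_check("search", False)]

# Run the independent checks concurrently instead of one after another.
with ThreadPoolExecutor() as pool:
    results = [future.result() for future in [pool.submit(c) for c in checks]]

for name, passed in results:
    print(f"{name}: {'PASS' if passed else 'FAIL'}")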

The next level of sophistication covered tools that permitted machines to take decisions based on fixed rules. Auto-scaling infrastructure is a good example of this. Machines could now determine the compute power required to service the load being handled by an application, while humans configured the bounds and steps within which the compute power could scale.
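
A minimal sketch of that second level might look like the following: humans configure the bounds, thresholds and step size, and the machine applies the fixed rule to the observed load. The numbers are illustrative.

MIN_INSTANCES, MAX_INSTANCES, STEP = 2, 10, 1    # bounds and step set by humans
SCALE_UP_AT, SCALE_DOWN_AT = 0.75, 0.25          # utilisation thresholds set by humans

def desired_instances(current: int, cpu_utilisation: float) -> int:
    # Fixed rule applied by the machine: step up or down within the bounds.
    if cpu_utilisation > SCALE_UP_AT:
        current += STEP
    elif cpu_utilisation < SCALE_DOWN_AT:
        current -= STEP
    return max(MIN_INSTANCES, min(MAX_INSTANCES, current))

print(desired_instances(4, 0.90))  # 5: scale up one step
print(desired_instances(4, 0.10))  # 3: scale down one step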

The final level of sophistication will enable machines to evolve without human intervention – analyzing data and learning from it will empower tools to mutate or augment rules, allowing them to take increasingly complex decisions. We wanted to share a few ideas of how AI can augment the software development cycle.

The Software Development Cycle

One of the most common approaches to building AI use cases is leveraging the neural network: a computer system modelled on the human brain and nervous system. The popular approach involves developing a single algorithm that encompasses the intermediate processing steps of multiple neural net layers, leading to a direct output from the input data. This process is successful and provides very good results when large samples of labelled data are available. The challenge with this method is that the internal processing of learning is not clearly explainable and can be difficult to troubleshoot for accuracy.
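
As a rough illustration of that layered, end-to-end idea, here is a minimal Python sketch of a tiny feedforward network using NumPy; the weights are random stand-ins rather than anything trained on labelled data.

import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One intermediate processing step: a linear transform followed by ReLU.
    return np.maximum(0.0, x @ w + b)

# Stacked layers mapping a 4-feature input directly to a single output score.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=(1, 4))                 # one example with 4 input features
output = layer(layer(x, w1, b1), w2, b2) @ w3 + b3
print(output)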

Ideation – Analysis of usage data to find anomalies/unexpected behaviour.

Prototyping – Low / no-code tools to create clickable prototypes from hand-drawn sketches.

Validation – Leverage past usage data to test new designs/ideas.

Development – Automated code generation and refactoring.

Requirements Breakdown – Generation of positive and negative acceptance criteria based on past requirements.

Testing – Automating test creation and maintenance.

Deploy – Ensure zero impact deployments by predicting right time to deploy and rate of the roll-out.

Monitoring – Use Telemetry Data to predict hardware/system failure.

Maintenance – Automate identification and removal of unused features.

AI Assistance

Ideation Augmented: Take the example of an e-commerce website. Here, people analyze data to find where users drop off during an ordering funnel and come up with ideas to improve conversion. In the future, we could have machines that blend usage analytics with performance data to determine whether slow transactions are the cause of drop-offs. Additionally, these machines could also identify faulty code that, when fixed, will improve performance.

Testing Augmented: Writing tests for legacy systems, even with documentation, is very hard. Automated test creation tools that leverage AI to map out the application’s functionality, using usage and code analytics, allow teams to quickly build a safety net around such legacy systems. This allows technologists to make changes without breaking existing functionality.

Maintenance Augmented: A large part of maintenance-related costs today is spent on managing redundant features. Identifying these redundancies is a complex, error-prone process because people have to correlate data from multiple sources. Allowing AI tools to take up this role of connecting and referencing data across sources will automate the marking of unessential features and associated code.

Given the nature of evolution in the dynamic software development world, here’s our recommendation for how to prepare and focus efforts:

1. Recognize and leverage elastic infrastructure which ensures the ability to add and remove resources ‘on the go’ to handle the load variation

2. Equip your teams to strategically collect and process data, an invaluable asset whose volume will only increase given the prevalence of emerging tech like voice, gesture etc.

3. Include a stream within investment strategies that grows AI-assisted software creation – rule-based intelligent tools and self-learning tools.


Credits : Searcherp.techtarget

 

Information security risks in supply chain software are becoming increasingly prevalent, particularly as global companies have become more dependent on third-party vendors.

According to Symantec, more and more attackers are injecting malware into the supply chain to infiltrate organizations. In fact, there was a 200% increase in these attacks in 2017 — one every month compared to four attacks annually in previous years.

Supply chain software offers a new arena to threat actors intent on penetrating enterprise networks, said Peter Nilsson, vice president of strategic initiatives at MP Objects, a provider of supply chain orchestration software in Boston.

“Previously, people had their ERPs behind their very tight firewalls, and no one from the outside could get in without being monitored by the hawk eyes of the IT department,” he said. “Now, enterprises are saying, ‘We need to collaborate with our partners and we have to open up our ERP and let them in.'”

But if those third parties don’t have adequate security, attackers can infiltrate their systems to attack the enterprise.

Any time an enterprise introduces software into the mix of its supply chain, it runs the risk of cybersecurity issues, said Justin Bateh, supply chain expert and professor of business at Florida State College in Jacksonville, Fla. Most risks are caused by not having the proper controls in place for third-party vendors.

“There are many low-tier suppliers that will have weak information security practices, and not having clean and limited guidelines for these providers about security expectations will pose a significant threat,” he said.

Causes of potential security risks

Poor internal security procedures and a lack of compliance protocols can also introduce potential threats, including marketing campaign schemes, privacy breaches and disruption of service attacks, according to Bateh.

In addition, smaller companies may use inadequate software coding practices. As such, larger enterprises can’t be sure the software is being checked for quality as it goes through its development cycle, said Lisa Love, owner and president of LSquared, an information security consulting firm in Greenwood Village, Colo.

Consequently, something as unintentional as bad scripting can introduce vulnerabilities into the providers’ supply chain software, as well as into the enterprise, which attackers could then exploit, she said.

Jason Rhoades, a principal at Schellman & Co., a provider of attestation and compliance services in Tampa, Fla., agreed that in recent years the enterprise’s attack surface has increased along with the tremendous growth in the supply chain.

“Looking at the recent Equifax breach confirms that vendor and supply chain software poses a true security risk that the enterprise cannot ignore,” he said.

Equifax blamed its 2017 breach on a flaw in the third-party software it was using. And the massive breach of Target’s systems in 2013 was caused by attackers who stole the login credentials of its HVAC contractor and used them to infiltrate Target’s network.

Jonathan Wilson, a partner at the law firm Taylor English Duma LLP in Atlanta, agreed that many security risks come from the data connections and handoffs in the supply chain moving from smaller to larger providers.

“A lot of these small companies and startups don’t have robust data security systems,” said Wilson, who has represented a Fortune 500 international supply chain logistics provider. “They get a breach or some sort of exploitation is involved, and by working their way up the chain, the attacker can utilize the permissions that the smaller vendors get to obtain access to the larger company’s system.”

Another way hackers could introduce risk into an enterprise is via the supply chain software itself, according to Michael O’Malley, vice president of strategy at Radware, a provider of cybersecurity services in Mahwah, N.J. Most supply chain applications have some type of web interface with a login page to ensure that only the right people are authenticated and allowed to access the application.

Attackers can also use credential stuffing to infiltrate an enterprise via an unprotected web interface, he said. By trying lists of stolen usernames and passwords against the login page, attackers can hit on a legitimate combination and pose as someone else.

“Or they do something else offline through a phishing email scam to get users of the software to click on a link or respond to an email and dupe them into sharing their credentials,” O’Malley said. “They can then use those credentials to log in or break into the application.”
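
As an aside, one common mitigation for this kind of credential abuse is to throttle repeated failed logins. The Python sketch below is illustrative only: the thresholds and in-memory storage are placeholders, and production systems typically rely on shared state plus additional signals such as multi-factor authentication.

import time
from collections import defaultdict, deque

MAX_FAILURES = 5          # lock out after this many failures...
WINDOW_SECONDS = 300      # ...within this window (both values illustrative)

_failures = defaultdict(deque)   # account -> timestamps of recent failed logins

def record_failed_login(account: str) -> bool:
    """Record a failed login attempt; return True if the account should be locked."""
    now = time.time()
    window = _failures[account]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) >= MAX_FAILURES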

Another way attackers can penetrate an enterprise’s network via the supply chain is from the inside, according to O’Malley. This is where IoT devices come into play. More and more of these supply chain software applications — particularly in high-tech manufacturing — are part of an IoT network that provides different diagnostics and information about the machines on a factory floor.

These devices are providing all this real-time input back to the supply chain management software application. However, they can be easily compromised because they tend to be very inexpensive Linux-based devices that weren’t designed with security in mind, and they don’t have the necessary protections against hacking, he said.

“What we commonly see is that within minutes of these devices being connected to the internet, someone infiltrates them and puts a piece of malware or a bad bit of code on them,” O’Malley said. “And those are then used later as an attack on something else or in an attack on the software application itself.”


 

Credits : Datanami

 

Application development has become a cloud-focused initiative in many enterprises. Consequently, the languages, tools, and platforms needed to support today’s development initiatives are rapidly evolving.

Application development is also a discipline in which data science is assuming a greater role. To support the growing range of development projects with artificial intelligence (AI) at their core, enterprises are having to continually transform DevOps workflows to support continual building, training, and iteration of deep learning, machine learning, and other statistical models for deployment into production cloud environments.

As we look ahead to 2019, we expect to see the following dominant trends in enterprise application development:

  • Open development ecosystems will be at the heart of every tool vendor’s go-to-market strategy: Practically every vendor—large and small, established and startup—has pinned its future on participating in the open-source community. Some have taken that open commitment even further in 2018. In the year gone by, Microsoft took on a special status in the open-source world with its acquisition of GitHub, the foremost DevOps platform in the open-source ecosystem. In 2019, Microsoft will continue to abide by its express commitment to allow GitHub to operate in vendor-agnostic fashion in support of any language, license, tool, platform, or cloud that developers wish to use. In addition, Microsoft is likely to open-source more of its software projects and to refrain from asserting IP claims on a wider range of its IP patents, consistent with its recent joining of the Open Invention Network. The vendor is likely to assume a more proactive role in the open-source community as an evangelist for the new era of post-proprietary software development.
  • Serverless will dominate new cloud-native application development: Cloud application developers flocked to functional programming, also known as serverless, in a big way in 2018. This trend shows no signs of slowing down, as evidenced by the growing range of serverless tools, interfaces, projects, and other initiatives that have come to market this year. It’s also evident in the eagerness with which developers are adopting these offerings. In 2019, we’re likely to see the open-source Knative serverless project implemented by many vendors beyond its core developers Google, Pivotal, IBM, Red Hat and SAP, with Microsoft, AWS, and Oracle likely to come on board during the year. In addition, it’s very likely that Knative will be submitted to CNCF for development and governance under its growing cloud-native stack.

  • Developers will build hybrid serverless and containerized cloud applications: Hybrid clouds are becoming common in many enterprise IT strategies. At the application level, more developers are building hybridized cloud applications that incorporate data, workloads, and other resources that span public and private clouds. In 2019, we’ll see more development tools that enable hybridization of heterogeneous containerization and serverless environments. Adoption of the emerging Knative project will accelerate the creation of hybridized serverless applications that run over federated Kubernetes multiclouds.
  • Transactional applications will shift toward the cloud’s edges: Conversational commerce, Alexa style, is the harbinger of the more pervasive edge-commerce future that awaits us all. In 2019, developers will increasingly build transactional applications that are designed to operate over entirely distributed IoT, edge, mesh, and other cloud fabrics. To support these radically decentralized environments, more enterprises will use blockchains and smart contracts to provide immutable logs, enable edge-to-edge transactional integrity, and ensure full transparency and accountability. However, it will still take 2-3 years, at least, for all necessary technological, commercial, regulatory, and other standard practices to coalesce into a new edge-based transactional backplane for any-to-any e-commerce.
  • Data-science workbenches will adopt standardized cloud-native DevOps: AI is the heart of modern applications. Developing AI applications for the cloud increasingly requires the building of containerized microservices that are orchestrated within and across Kubernetes clusters over DevOps workflows. In the past year, the AI community has developed an open-source project called Kubeflow that provides a framework-agnostic pipeline for making AI microservices production-ready across multi-framework, multi-cloud computing environments. Early adopters of Kubeflow include Agile Stacks, Alibaba Cloud, Amazon Web Services, Google, H2O.ai, IBM, NVIDIA, and Weaveworks. In 2019, we’ll see the project mature and be implemented more broadly in commercial AI DevOps toolchain solutions. In this way, more enterprise app-development teams will be able to align their DevOps processes across teams working on AI and other cloud-native development projects.
  • Python, Kotlin, and Rust will become core languages for building new applications: Mobile application developers will continue to rely on JavaScript, Java, Objective-C, and PHP. In 2019, other languages will grow in importance in developer toolkits to address the requirements of many hot new applications. Most importantly, Python has become the go-to language for AI, Internet of Things (IoT), Web, mobile, and gaming apps, owing to the fact that it’s easy to learn and use on practically any platform. Kotlin’s superior flexibility may enable it to replace Java at some point in the standard Android developer’s repertoire, while Swift’s compact, clear syntax is building momentum among iOS developers. Rust’s support for memory-safe concurrency gives it a leg up on other languages for IoT, embedded, and other applications that require always-on 24×7 robustness.
  • Client-side AI frameworks will transform Web application development: JavaScript frameworks such as React are the heart of rich application development for Web, mobile, and other client-side edge application platforms. In 2019, more developers will build edge applications in JavaScript frameworks that enable richly interactive browser-based experiences, platform-native performance parity, and AI-powered client-side intelligence. GPU-accelerated client-side AI will become the heart of edge applications, as adoption of such open-source frameworks as TensorFlow.js, Brain.js, and TensorFire continues to grow.
  • Advances in GPUs will stimulate innovation in immersive applications: Users are adopting augmented, mixed, and virtual reality applications in a wider range of industrial, business, scientific, and consumer uses. Gaming, in particular, has been a huge growth area for these immersive applications, owing in part to the availability of high-performance, low-cost GPUs on more client platforms. In 2019, we’ll see this trend accelerate as the new Nvidia Turing GPUs, with their lightning-fast real-time raytracing, come to market in support of next-generation immersive apps that combine photorealistic visuals with AI-driven contextual intelligence. Developers will build a new generation of GPU-aware smart camera applications that leverage the client-side AI frameworks, such as TensorFlow.js, to support fluidly continuous immersive visuals even in disconnected and intermittently connected usage scenarios.
  • Robotic process automation will become a principal development platform for AI-driven apps: Robotic process automation has been one of the chief growth sectors in the software market over the past year. As an enabler for developing automation apps that emulate how people carry out myriad tasks, RPA has become a principal use case for AI in the workplace. Though AI has traditionally been used in RPA to infer application logic from artifacts that are externally accessible, its role has expanded to enable the creation of intelligent bots for business process automation. In 2019, we’ll see a growing role for AI in RPA to enable development of bots that can be orchestrated as microservices across Kubernetes environments. Through the adoption of cloud-native interfaces, RPA vendors will be able to address more IoT, edge, and multicloud opportunities.
  • AI-augmented programming tools will make developers more productive: Software developers have long used automated code generation tools to lighten the load. Augmented programming refers to the next generation of “no code,” “low code,” and other approaches for automating coding and other development tasks. In 2019, we expect to see more of these tools incorporate abstraction layers that allow developers to write declarative business logic that is then translated by tools into procedural programming code (a toy sketch of this idea appears after this list). In addition, more augmented programming tools will incorporate AI to generate code, by means of machine learning algorithms that have been trained on human-developed codebases maintained in GitHub and other repositories. More of these AI-augmented programming tools will rely on embedded graph models and leverage reinforcement learning to compile declarative specifications into code modules that are automatically built, trained, and refined to achieve the intended programming outcomes.

  • Conversational user interfaces will grow less chatty but more useful: Chatbots have been a growing focus for application developers over the past several years. They’ve entered the consumer IoT and mobility arenas through Amazon Alexa, Google Assistant, and similar voice-activated appliance initiatives, while also finding their way into bot-powered text chat features in more enterprise applications. In 2019, we’ll see developers tap into sophisticated AI-powered digital assistant platforms such as Google Duplex to enable chatbots to automate more tasks predictively, thereby becoming paradoxically less chatty but more productive.

  • Digital wellness will become a key mobile-app usability criterion: Users’ growing dependency on devices is undeniable, and it’s beginning to impact how developers approach building mobile applications. Though no one seriously believes that the average user will rely on their devices any less in the future, there is a growing repertoire of mobile application features—such as predictive automation of routine tasks and context-adaptive suppression of distracting notifications–that can help users unglue their frantic eyeballs from their smartphones now and then. Google’s emphasis on “digital wellness” features in its new Android 9 Pie operating system signals that we’ve entered a new era in mobile application development. In 2019, mobile application developers will leverage the predictive, adaptive, contextual and other usability features in this and other mobile platforms to help users stay sane, focused, and productive amid the growing glut of mobile devices in their lives.
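
Picking up the declarative-to-procedural idea from the AI-augmented programming item above, here is a toy Python sketch: a rule written as plain data is compiled into an executable check. Field names, operators and values are illustrative only.

import operator

OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

def compile_rule(rule: dict):
    # Turn a declarative rule (data) into a procedural predicate (code).
    op = OPS[rule["op"]]
    return lambda record: op(record[rule["field"]], rule["value"])

needs_review = compile_rule({"field": "order_total", "op": ">", "value": 1000})
print(needs_review({"order_total": 1500}))  # True
print(needs_review({"order_total": 200}))   # False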

 


Credits : Phoronix

 

While Windows users last week were greeted by the Radeon Software Adrenalin 2019 driver, on the Linux side there was the Radeon Software for Linux 18.50 release. The only listed public change for this 18.50 Linux hybrid driver build was RHEL 7.6 support, but I’ve since been able to test and confirm that the Radeon RX 590 is working with this new Linux driver package. As a result, here is a look at the Radeon RX 590 performance from this “AMDGPU-PRO” driver build compared to the latest open-source driver stack in the form of Linux 4.20 with Mesa 19.0-devel.

This article offers an initial look at how the Radeon RX 590 graphics card performs with these two different AMD Linux graphics driver options. Radeon Software for Linux 18.50 is the first release with RX 590 support, due to the few AMDGPU kernel patches needed for getting this newest Polaris variant working on Linux. Those RX 590 AMDGPU patches are in the process of landing in the Linux 4.20 mainline kernel.

When benchmarking the “PRO” 18.50 OpenGL/Vulkan driver components against the fully open-source alternative, Mesa 19.0-devel was used via the Padoka PPA on this Xubuntu 18.04 test box. No hardware changes were made between the different test driver configurations.

Using the Phoronix Test Suite, a variety of OpenGL and Vulkan Linux gaming benchmarks were carried out with the Sapphire Radeon RX 590 on both of the drivers.


Credits : Windpowerengineering

 

AnalySwift, a provider of efficient high-fidelity modeling software for composites and other advanced materials, announced the launch of its Academic Partner Program, through which it will offer universities no-cost licenses for academic research.

“We have always been close to the academic community, where both the SwiftComp and VABS software programs originated,” said Allan Wood, President & CEO of AnalySwift. “Our Academic Partner Program honors that tradition and broadens university access to cutting-edge simulation tools.”

Academic licenses of VABS and SwiftComp have always been available to universities for purchase, but the new program offers the licenses at no cost.

“Engineering faculty and students can benefit greatly from the full versions of the programs,” said Dr. Wenbin Yu, CTO of AnalySwift. “These are tools being used in industry to model complex, real composites including wind turbine and helicopter rotor blades, deployable space structures made from high-strain composites (HSC).”

The composite simulation programs are typically used in aerospace and mechanical engineering programs, such as for wind-turbine blades, with emerging applications in other areas.

“Since 2014, VABS has become our method of choice for rotor-blade structural design and optimization at our institute,” explained PhD student Tobias Pflumm at the Technical University of Munich. “With its help, we have successfully designed, tested and manufactured the rotor blades of our Autonomous Rotorcraft for Extreme Altitudes or AREA. We are currently using VABS extensively within a multi-disciplinary design environment to quantify uncertainties in the rotor blade design process.”

Inaugural members of the Academic Partner Program include the University of British Columbia (Composites Research Network), Technical University of Munich (Institute of Helicopter Technology), and Carleton University (Rotorcraft Research Group).
