Author: Alexander

• A magical WordPress project build with package managers and a bit of filing

Today I want to share with the Habr audience my approach to organizing an automated WordPress project build, which saves time when creating new sites.

Background

So, you build websites on WordPress, and with each new project you have to go to wordpress.org to download WordPress itself plus the set of plugins you use all the time. Or you install the plugins right from the admin panel, or, worse, copy them from the directory of a previous site. I always hated this: it is inelegant and offends the aesthetic sense. Besides, it takes time, a little at once, but constantly. So I wondered how to improve this process. I downloaded everything I needed, arranged it neatly in a folder, and ran “git init” and “git push”. Now I had a repository on Bitbucket with my WP build and everything I need. From that point on, at the beginning of development I could run “git clone” and get something ready-made. The method did not please me for long: shortcomings emerged. Namely:

• the repository is bloated (it contains all the source code of the plugins and the CMS);
• it always holds old versions of everything (of course, you can update them periodically, but that is too much bother);
• I want to store in the same repository the SCSS/SASS/LESS sources, the unminified JS code, and other development components that, in theory, should not end up in the production version of the project.

Then my laziness and I conferred and concluded that at the beginning of work on a new website we were willing to spend no more than one (at most two) console commands to organize everything and then go straight to the development process. Then my laziness thought some more and continued: “and make sure Git doesn't store everything at once, and that I don't have to roll out a new version over the old one (it should be new from the start), and that it can be pulled and deployed correctly on the server (you are the one who will maintain it all), and let it do all the work itself, and do it quickly, while I rest.”

Satisfying laziness's wishlist

First, I formalized the problem as a short list:

1. automate the installation of the WordPress core and current versions of the plugins that wander from project to project;
2. make the project settings depend on the server environment;
3. separate the source code of the client side of the project;
4. automate the client-side builds;
5. keep the Git repository free of redundant content.

And I started implementing. First, I went to read the WP documentation and found a beautiful thing there that allows you to separate the CMS core from what the developer changes. On that occasion I sketched out the following project structure:

    content/
    wp/
    index.php
    wp-config.php

The “wp” directory keeps the WordPress core files; the “content” folder is for themes, plugins, language files, and so on; “wp-config.php” is the standard WP settings file; and into “index.php”, guided by the documentation, I put the following:

    define('WP_USE_THEMES', true);
    
    require( dirname( __FILE__ ) . '/wp/wp-blog-header.php' );

I launched it on the server and checked: OK, it works. Now we need to make it download the latest version of WP. For this I used Composer (you can read how to install it here). I put all the files I created earlier into the “app” folder, so that all the service files sit one level above the executable “index.php”. From now on my site will be served from this directory (remember to modify the host configuration of your server accordingly). The “wp” folder was emptied of all its content. In the project root I placed a “composer.json” file with the following content:

{
  "require": {
    "php": ">=5.4",
    "johnpbloch/wordpress": "*"
  },
  "extra": {
    "wordpress-install-dir": "app/wp"
  }
}
    
    

“johnpbloch/wordpress” is a fork of WP suitable for installation via Composer, and “wordpress-install-dir” specifies the directory where the CMS core is installed. Typing in the console:

    composer install

I made sure that everything works: a fresh WordPress was downloaded into “app/wp”. What about the plugins? They are fine too: thanks to the wpackagist.org project, they can also be pulled in through Composer. To do this, just modify “composer.json”:

    
    {
      "repositories":[
        {
          "type":"composer",
          "url":"https://wpackagist.org"
        }
      ],
      "require": {
        "php": ">=5.4",
        "johnpbloch/wordpress": "*",
        "wpackagist-plugin/rus-to-lat-advanced": "*",
        "wpackagist-plugin/advanced-custom-fields": "*",
        "wpackagist-plugin/all-in-one-seo-pack": "*",
        "wpackagist-plugin/google-sitemap-generator": "*",
        "wpackagist-plugin/contact-form-7": "*",
        "wpackagist-plugin/woocommerce": "*",
        "wpackagist-plugin/saphali-woocommerce-lite": "*"
      },
      "extra": {
        "wordpress-install-dir": "app/wp",
        "installer-paths": {
          "app/content/plugins/{$name}/": ["vendor:wpackagist-plugin"],
          "app/content/themes/{$name}/": ["vendor:wpackagist-theme"]
        }
      }
    }
    
    

The “repositories” section specifies the wpackagist address, the “installer-paths” section specifies where to install plugins and themes, and the “require” section lists the WP plugins in the form “wpackagist-plugin/{{plugin_name}}”. Almost all plugins from wordpress.org are available on wpackagist; you can check a plugin's availability via the search on wpackagist.org.

I then ran:

    composer update 

and saw that the directory “app/content/plugins” now contained all the required plugins. Next we have to deal with the settings. Let me remind you that the goal is to make the database and debug settings depend on the development environment: one set on the local server, another on the production one. To do this, we extract them into a separate file, “local-config.php”:

    
    define( 'DB_NAME', '%%DB_NAME%%' );
    define( 'DB_USER', '%%DB_USER%%' );
    define( 'DB_PASSWORD', '%%DB_PASSWORD%%' );
    define( 'DB_HOST', '%%DB_HOST%%' ); // Probably 'localhost'
    
    ini_set( 'display_errors', true );
    define( 'WP_DEBUG_DISPLAY', true );
    
    define( 'AUTH_KEY',         'put your unique phrase here' );
    define( 'SECURE_AUTH_KEY',  'put your unique phrase here' );
    define( 'LOGGED_IN_KEY',    'put your unique phrase here' );
    define( 'NONCE_KEY',        'put your unique phrase here' );
    define( 'AUTH_SALT',        'put your unique phrase here' );
    define( 'SECURE_AUTH_SALT', 'put your unique phrase here' );
    define( 'LOGGED_IN_SALT',   'put your unique phrase here' );
    define( 'NONCE_SALT',       'put your unique phrase here' );
    
    

and change “wp-config.php” as follows:

    
    if ( file_exists( dirname( __FILE__ ) . '/local-config.php' ) ) {
    	define( 'WP_LOCAL_DEV', true );
    	include( dirname( __FILE__ ) . '/local-config.php' );
    } else {
    	define( 'WP_LOCAL_DEV', false );
    	define( 'DB_NAME', '%%DB_NAME%%' );
    	define( 'DB_USER', '%%DB_USER%%' );
    	define( 'DB_PASSWORD', '%%DB_PASSWORD%%' );
    	define( 'DB_HOST', '%%DB_HOST%%' ); // Probably 'localhost'
    
    	ini_set( 'display_errors', 0 );
    	define( 'WP_DEBUG_DISPLAY', false );
    
    	define( 'AUTH_KEY',         'put your unique phrase here' );
    	define( 'SECURE_AUTH_KEY',  'put your unique phrase here' );
    	define( 'LOGGED_IN_KEY',    'put your unique phrase here' );
    	define( 'NONCE_KEY',        'put your unique phrase here' );
    	define( 'AUTH_SALT',        'put your unique phrase here' );
    	define( 'SECURE_AUTH_SALT', 'put your unique phrase here' );
    	define( 'LOGGED_IN_SALT',   'put your unique phrase here' );
    	define( 'NONCE_SALT',       'put your unique phrase here' );
    }
    
    

Now, if the file “local-config.php” exists, the settings will be picked up from it. This file should be added to “.gitignore” (why would we need database passwords in the repository?). It's time to enter the database credentials into “local-config.php”, run the WordPress installation procedure, and visit the admin area.
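A minimal “.gitignore” for this layout might look like this (a sketch matching the structure above; adjust the paths to your project):

    # local settings with database credentials stay out of the repository
    app/local-config.php

    # everything Composer and npm can restore on demand
    app/wp/
    app/content/plugins/
    vendor/
    node_modules/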

In the admin area you need to visit “Settings -> General” and fix the addresses there, as follows:

The WordPress address ends with “/wp”; the website address is without the “/wp”.
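If you prefer not to keep these addresses in the database, WordPress also lets you pin both in “wp-config.php”. A minimal sketch (example.com is a placeholder):

    // Pin the core and site URLs so the split-directory layout survives DB imports
    define( 'WP_SITEURL', 'https://example.com/wp' ); // WordPress address (with /wp)
    define( 'WP_HOME',    'https://example.com' );    // website address (without /wp)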
Great, the site is usable. The next step I dedicated to custom styles and scripts (it would be somewhat illogical to have everything assembled on the server while still downloading all the jQuery-like libraries by hand). In preparation, I reworked the project structure:

    
app/
    content/
        themes/
            mytheme/
                build/
                index.php
                style.css
    wp/
    index.php
    local-config.php
    wp-config.php
src/
    fonts/
    js/
        main.js
    scss/
        style.scss
composer.json
    
    

The source font files, scripts, and styles are stored in the “src/” folder. From there they are built with gulp, minified, and placed into the “app/content/themes/mytheme/build” folder. As the CSS preprocessor I use SCSS (I think we all know how to install it, but if not, there are instructions), and JS is bundled with browserify. It seemed logical to me that the client-side dependencies should be pulled in with npm. My “package.json” ended up like this:

    
    {
      "devDependencies": {
        "bourbon": "*",
        "bourbon-neat": "*",
        "browserify": "*",
        "fullpage.js": "*",
        "gulp": "*",
        "gulp-clean-css": "*",
        "gulp-concat": "*",
        "gulp-sass": "*",
        "gulp-sourcemaps": "*",
        "gulp-uglify": "*",
        "jquery": "*",
        "normalize-scss": "*",
        "vinyl-source-stream": "*"
      }
    }
    
    

I filled in nothing except the “devDependencies” section, because I obviously do not plan to publish this package to npm. I type in the console:

    
    npm install
    
    

I wait a few minutes and see that all these dependencies have duly appeared in “node_modules”. The icing on the cake is the “gulpfile.js” file with this content:

    
    'use strict';
    
    var browserify = require('browserify'),
        source = require('vinyl-source-stream'),
        gulp = require('gulp'),
        sass = require('gulp-sass'),
        uglify = require('gulp-uglify'),
        cleanCSS = require('gulp-clean-css'),
        sourcemaps = require('gulp-sourcemaps'),
        sourcePath = './src/',
        buildPath = './app/content/themes/mytheme/build/';
    
    //scss
    gulp.task('scss', function () {
        return gulp.src('./src/scss/style.scss')
            .pipe(sass().on('error', sass.logError))
            .pipe(gulp.dest(buildPath + 'css'));
    });
    
    gulp.task('scss:watch', function () {
        return gulp.watch(sourcePath + 'scss/**/*.scss', ['scss']);
    });
    
    //js
    gulp.task('browserify', function() {
        return browserify(sourcePath + 'js/main.js')
            .bundle()
            .pipe(source('main.js'))
            .pipe(gulp.dest(buildPath + 'js'));
    });
    
    gulp.task('browserify:watch', function () {
        return gulp.watch(sourcePath + 'js/**/*.js', ['browserify']);
    });
    
    //fonts
gulp.task('copy:fonts', function () {
    // return the stream so gulp can track task completion
    return gulp.src(sourcePath + 'fonts/**/*', {base: sourcePath + 'fonts'})
        .pipe(gulp.dest(buildPath + 'fonts'));
});
    
    //minify
    gulp.task('minify:js', ['browserify'], function(){
        return gulp.src(buildPath + 'js/*.js')
            .pipe(sourcemaps.init())
            .pipe(uglify())
            .pipe(sourcemaps.write())
            .pipe(gulp.dest(buildPath + 'js'))
    });
    
    gulp.task('minify:css', ['scss'], function(){
        return gulp.src(buildPath + 'css/*.css')
            .pipe(cleanCSS({compatibility: 'ie9'}))
            .pipe(gulp.dest(buildPath + 'css'));
    });
    
    
    //task groups
    gulp.task('default', ['copy:fonts', 'scss', 'browserify']);
    gulp.task('watch', ['copy:fonts', 'scss:watch', 'browserify:watch']);
    gulp.task('production', ['copy:fonts', 'scss', 'browserify', 'minify:js', 'minify:css']);
    
    

The “gulp” command copies the fonts, compiles the SCSS, bundles the JS, and puts it all neatly into the build folder. “gulp watch” does the same, but on every file change. “gulp production” additionally strips comments from the output and minifies it.

    What is the result?

In the end, you don't need to repeat all of the above: I have conveniently uploaded everything to GitHub: https://github.com/IvanZhuck/kosher_wp_seeder.

You just need to clone the repository and run the following commands (after adjusting the list of plugins and dependencies, if necessary):

composer install
npm install

My laziness and I are satisfied: projects now start faster and work better.

    This text is a translation of the article “Волшебная сборка проекта на WordPress при помощи пакетных менеджеров и напильника”  published by @ivan_zhuck on habrahabr.ru.

    About the CleanTalk service

    CleanTalk is a cloud service to protect websites from spam bots. CleanTalk uses protection methods that are invisible to the visitors of the website. This allows you to abandon the methods of protection that require the user to prove that he is a human (captcha, question-answer etc.).

• A 600 Gbit/s DDoS as the democratization of censorship

A 600 Gbit/s DDoS as the democratization of censorship

The well-known American journalist Brian Krebs has long written on information security topics, unmasking the identities of shady dealers, mainly from Eastern Europe. Over the years Brian has had to endure a lot. An angry Ukrainian hacker collected two bitcoins on forums to buy heroin and send it to Krebs by post; other hackers sent a SWAT team to his house with a 911 call supposedly from his number, took out a $20,000 loan in his name, and transferred $1,000 to his PayPal account from a stolen payment card. Malware authors even mention Brian Krebs in the code of their programs. What can you do: these are the occupational hazards of journalism in the information security field.

Now Krebs has become the target of new attacks. This time the attackers organized the most powerful DDoS attack yet, 600 Gbit/s, on the website KrebsOnSecurity.com. A few days later Akamai gave up: to protect its other customers, it moved KrebsOnSecurity.com out from under its protection.

The attack began on the evening of Tuesday, September 20. Initially it had no effect, thanks to the prompt work of Akamai's engineers. The traffic was filtered out, but Akamai experts admitted that this attack was almost twice as powerful as the biggest DDoS they had ever seen, and probably one of the biggest in the history of the Internet in general.

On September 20 at 20:00 the flow of garbage traffic reached 620 Gbit/s. This is more than enough to bring down any website. Until then, the biggest DDoS attack on Akamai resources had peaked at 363 Gbit/s.

The DDoS was not organized by the standard method of amplifying queries through DNS servers. Instead, most of the traffic consisted of generic routing encapsulation (GRE) data packets. The GRE communication protocol is used to establish direct point-to-point connections between network nodes. Such a large volume of traffic surprised the experts: it is not entirely clear how the amplification was carried out. If there was no amplification, then the attacker used hundreds of thousands of infected machines for the attack: a record botnet of sorts. Perhaps it consists of IoT devices such as routers, IP cameras, and digital video recorders (DVRs).

Brian Krebs holds no grudge against Akamai. For four years, together with its subsidiary Prolexic, the company protected him from DDoS attacks many times; the current DDoS was simply too large. When it became obvious that the attack would affect other customers, Akamai warned Brian Krebs in advance, on September 21 at 16:00, that he had two hours to move to another network, and at 18:00 it removed the protection.

The company's management later explained that repelling such an attack any longer would have cost them millions of dollars. Perhaps the executives exaggerated a little, but protection against attacks of this scale really does cost from $100,000 to $150,000 per year, and they had always defended Krebs for free.

In order not to bring down his host, the journalist asked to redirect all traffic to 127.0.0.1, and then tried the services of Project Shield, Google's charitable project designed specifically to protect journalists from DDoS attacks. It turned out to be a perfect fit, so on September 25 the site came back online and has worked flawlessly ever since.

These events pushed Brian Krebs toward philosophical thoughts about the nature of Internet censorship. He recalls the famous words of businessman and libertarian John Gilmore about the impossibility of censoring the Internet. Gilmore said: “The Net interprets censorship as damage and routes around it.” These are great words that have been confirmed by life many times. Even now in Russia one can clearly see how ineffective Internet censorship is. The network really does perceive the attempts of Roskomnadzor and other censors to block specific resources as damage to the integrity of its structure, as an anomaly in normal operation, and offers ways to work around this anomaly.

But this principle applies only to “political” censorship, which is traditionally implemented by the governments of different countries limiting their citizens' free access to information.

In the case of a DDoS attack, we see another kind of attempt to “gag” an opponent, to silence him. Here the state is not involved: the censorship is implemented by the coordinated efforts of many people or bots. In this sense, one can say that a DDoS attack is a “democratic” version of censorship, where the majority imposes its will on the minority and silences the opponent (of course, such actions have nothing to do with true democracy).

Brian Krebs believes that the greatest threat of censorship today is not the toothless attempts of state officials to ban something on the Internet (officials still understand absolutely nothing about technology and are not capable of inflicting significant damage), but precisely the acts of experienced professionals. In recent years the underground hacker community has quietly turned into a powerful transnational organization with enormous computing resources concentrated in its hands. Under certain conditions these resources can turn into a cyberweapon.

It is difficult to imagine the government of any country organizing a 600 Gbit/s DDoS attack; it is inconceivable. But the transnational hacker community can. In this sense, Brian Krebs speaks of “the democratization of censorship”.

     

    This text is a translation of the article “DDoS на 600 Гбит/с как демократизация цензуры”  published by @alizar on habrahabr.ru.

    About the CleanTalk service

    CleanTalk is a cloud service to protect websites from spam bots. CleanTalk uses protection methods that are invisible to the visitors of the website. This allows you to abandon the methods of protection that require the user to prove that he is a human (captcha, question-answer etc.).

  • Vulnerabilities of CCTV systems allow hackers to create massive botnets

    Vulnerabilities of CCTV systems allow hackers to create massive botnets

According to a statement from US-CERT, serious vulnerabilities have been found in the firmware of the AVer Information EH6108H+ digital video recorder (DVR) that could allow attackers to easily gain remote access to the devices and even form them into botnets.

    Vulnerabilities

Security researchers have found three critical vulnerabilities. The first (CVE-2016-6535) is the presence of two hidden accounts for remote connections. Each of them has root rights, and the access passwords are hard-coded in the firmware, so the accounts cannot be disabled or removed from the system. As a result, an attacker who knows the IP of a specific camera can easily connect to it via Telnet.

In addition, attackers can gain access to the admin panel entirely without administrator passwords through an error in the authentication system (CVE-2016-6536). To access the control panel, a hacker just needs to go to the address [IP-device]/setup and choose the “handle” option; the administrative page then opens without a password. Having accessed it, an attacker can change the device settings and even change the passwords of all users of the system.

The third vulnerability (CVE-2016-6537) leads to the disclosure of confidential information; the problem occurs because of an error in the mechanism for processing user credentials.

How to protect yourself

According to the US-CERT statement, at the moment there are no patches fixing the discovered vulnerabilities. The manufacturer, AVer, describes the firmware on its website as “no longer supported” (discontinued).

The only effective way to prevent attacks through these holes is to limit access to the devices with a firewall or network hardware configuration.
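As a rough sketch (not vendor guidance; the addresses are placeholders), access to a DVR's Telnet and web interfaces could be limited on an upstream Linux gateway like this:

    # allow the admin subnet to reach the DVR's Telnet and web ports...
    iptables -A FORWARD -d 192.0.2.10 -p tcp -m multiport --dports 23,80 -s 192.168.1.0/24 -j ACCEPT
    # ...and drop everyone else
    iptables -A FORWARD -d 192.0.2.10 -p tcp -m multiport --dports 23,80 -j DROP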

    The extent of the problem

The presence of easy-to-exploit vulnerabilities and backdoors in DVR devices is not news. Positive Technologies experts have previously found critical vulnerabilities and so-called “master passwords” that allow attackers to easily gain access to these devices, hundreds of thousands of which are reachable from the Internet. For example, problems have been found in Samsung video surveillance systems, as well as in popular DVR firmware used by many vendors.

Also, not long ago it became known that the BASHLITE worm had infected more than 1 million DVR devices, which the attackers formed into botnets for DDoS attacks.

Earlier this year, researchers from Sucuri found a botnet of 25,000 Internet-connected video surveillance devices. In addition, a DDoS botnet consisting of infected webcams was found by specialists from Arbor's Security Engineering and Response Team (ASERT).

It is important to understand that attackers often do not need to apply much effort to find gaps in the protection of surveillance systems because, as a rule, the vulnerabilities they contain are very primitive.

The situation is aggravated by the fact that DVR manufacturers often do not fully create the firmware for their devices themselves but use third-party developments. Such firmware can be distributed in various dubious ways and may contain hidden, undocumented logic that the manufacturer of the final DVR may know nothing about.

For example, our experts discovered vulnerabilities in a popular firmware that many DVR manufacturers adapt and extend in their own way. Accordingly, vulnerabilities in such firmware endanger many different devices from different manufacturers.

Moreover, many manufacturers do not pay enough attention to releasing updates or to developing mechanisms for deploying them centrally to end devices and notifying users. When third-party firmware is used, the remediation process becomes even more complicated: in such cases the DVR manufacturer does not fully control the firmware and may not be able to change it.

For example, we have not been able to establish contact with one of the producers of such popular and vulnerable firmware, so that they could correct the problems found. More detailed information was provided in a report at the Positive Hack Days III forum.

Vulnerabilities in, and the hacking of, DVR devices are a serious threat to private companies. With access to the CCTV system, an attacker can use it as a springboard for further stealthy attacks within the company's network (APT). The typical protections used in companies are often unable to detect such penetration (for example, the classic antivirus approach is powerless here).

In effect, a malicious device appears in the corporate network: a minicomputer on which an attacker can install their own software. A backdoor in such devices can exist for a very long time without being noticed.

    What to do

To protect themselves, Positive Technologies experts advise isolating digital video systems from the Internet (for example, through router and/or firewall settings). It is desirable to limit access to the DVR from the internal network, granting it only to those addresses that definitely need it (for example, administrators only), and similarly to limit the DVR's own network access, giving it access only to the locations it needs. It is best to place these devices in a separate, isolated network.

In general, with the development of the Internet of Things, the opportunities for creating such botnets increase significantly: many new gadgets are developed and brought to market without any regard for security (on the contrary, their Internet connection schemes are simplified as much as possible). In this situation we can advise individuals and companies to be more selective when purchasing equipment and to carry out security analysis of new devices.

Identifying botnets and investigating incidents also becomes more complicated when the infected machines are not personal computers but numerous automated systems whose behavior nobody is watching.

    This text is a translation of the article “Уязвимости систем видеонаблюдения позволяют хакерам создавать масштабные ботнеты”  published by @ptsecurity on habrahabr.ru.

    About the CleanTalk service

    CleanTalk is a cloud service to protect websites from spam bots. CleanTalk uses protection methods that are invisible to the visitors of the website. This allows you to abandon the methods of protection that require the user to prove that he is a human (captcha, question-answer etc.).

  • Breeding Business: from ordinary blog to extraordinary magazine

A geek at heart, I had always been coding little projects on localhost and a few failing websites. I guess I never really took the Internet seriously.

Then I realized that the jobs I was doing in luxury hospitality were not making me happy. I just loved coming back home and writing, developing, and designing. It's simply what I love. So I started looking at opportunities to generate a very small income that could make a website sustainable. And I had zero money to invest.

Over the last few years, WordPress and blogging have been a huge hit and a lot of people have gone for it. They think about monetization before having thought about their content; I took it the other way around.

    Why Blogging About Dog Breeding?

When I set my mind to starting an online blog, I looked at the usual ways of finding the perfect “keyword”, “topic”, or “niche”. These include Google Keyword Planner, Google Trends, and some paid tools. I ended up with three topics that were seemingly searched for and that I was happy to write posts on.

Then I picked the best topics and started writing. And this is when I realized I couldn't write about anything other than what I truly loved: responsible and ethical dog breeding. I was writing one article after another. It just felt right.

Breeding dogs is something that has run through several generations of my family, and although I haven't done it extensively myself, I am passionate about canine genetics and the mechanisms that give you the best bloodline of all.

    Dog breeding is a passion of mine and it would be hard for me not to write about it.

    What Is Breeding Business?

Breeding Business was born after I wrote a few articles. At the time I was going to Facebook groups to promote my articles (and eventually got suspended!) because Google wasn't sending me enough traffic at first.

The website consists of a lot of articles written and published in different categories: how-tos, interviews with breeders, reviews of dog breeding supplies, and obviously in-depth articles on how to breed dogs.

After just a few weeks, some visitors started asking what books we recommended. Unfortunately, most books are either too narrow in their topics or too breed-specific. A dog is a dog, and the principles remain the same for a Chihuahua or a Rottweiler.

Therefore, we created our very own ebook, The Dog Breeder's Handbook. It was created in iBooks Author, since it's a free application built by Apple, and at the time I didn't know whether the ebook was going to be a hit or a miss. I like to be in motion, try things, and if they fail, move on to the next one.

The Dog Breeder's Handbook offers all the theoretical knowledge dog breeders need and a lot of actionable tips to put into practice. The launch was slow because the traffic was low, yet it was steadily generating a few hundred dollars every month. This is what kept me going and made me believe in it even more.

From then on, I thought I would add another product many visitors were hinting at: a WordPress plugin for dog breeders. I built it in a few weeks, and today it is a very good seller. I release updates based on user feedback and have a similar project to be released soon.

    Challenges When Growing a Simple Blog Into an Online Magazine

Being alone and seeing the traffic (and revenue) grow, questions start to pop into your mind.

    It’s time for some business decisions

A blogger and solo entrepreneur always strives for steady growth. I do not identify with the mega-growth startups we read about everywhere. To each their own!

With Breeding Business, growth has been great, especially since Google started sending traffic our way. There was no specific strategy; we just put out great content. Often.

Yet we're still asking ourselves a million questions…

    • Should I add another product or should I focus and grow these?
• Communities around blogs are all the hype; should I make one?
    • Is the traffic growth normal or too slow?
    • Subscriptions are so popular these days, but what to offer?

These are business decisions to make. I added another product: a course. It never took off, mainly because it largely duplicated what was in the ebook. We're thinking about a new use for courses in the future, because I could see people were interested.

Communities are great, but there is nothing worse than a dead forum, so we never took that risk; we are waiting for a bigger email list before perhaps launching a community one day. Subscriptions are also popular, but just not for us right now. A lot of blogs start charging a monthly or yearly fee for members to be part of a special club, but most of them see huge churn and give the model up after a few months.

    Growth requires a technical overhaul, too

Our traffic has been growing very well thanks to search engines. This is why we needed a quality anti-spam solution, and CleanTalk has been doing a sublime job at keeping fake user accounts and comments away.

With traffic growth comes a whole new set of questions:

• Why am I not converting more visitors into opt-ins or customers?
• GTmetrix and other page speed tests are giving me low scores; how can I optimize my website?
• Why do so many people read one article and leave?

These are technical issues that truly take time to fix. There are mainly two ways we could tackle them:

    1. Patch each little issue one by one
    2. Build a brand new website from scratch with these issues factored in

After a few months of patching issues one by one, I am today almost finished with a brand new version of the website, to be released in two or three months after extensive testing. We're also pairing the new website with a move from cloud hosting to a VPS (multiplying the monthly hosting cost tenfold…).

    Restructure the tree of information

    Our current website was up and running when we had around 20-30 articles. We have over 300 articles today. People aren’t visiting other pages because the information is badly structured and they can’t find their way around.

Categories are being completely revamped. Stuff we thought would attract a lot of people ended up being a graveyard, and vice versa. So we're cleaning up the way the posts are categorized and tagged while also updating old pages.

    Speed and page load

Google apparently uses your website's loading speed as a signal when deciding on your ranking. My website currently performs very poorly in terms of page load speed.

And these results come after several fixes here and there. So this is the second main focus of the update. We're also making sure the website loads much, much faster on mobile devices thanks to wp_is_mobile(), the WordPress function for detecting mobile devices: we load lower-quality images and fewer widgets.

Another WordPress optimization is the use of the Transients API for our most repeated and complicated queries, such as our top menu, footer, home queries, related posts, etc. The way it works is simple: it lets you store cached data in the database temporarily. Instead of rebuilding the full menu on every page load, a transient requires only a single database call for the menu to be fetched.
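A minimal sketch of that pattern (the cache key and build_full_menu_html() are made-up names standing in for whatever expensive query you cache):

    // serve the rendered menu from a transient, rebuilding it at most once an hour
    function get_cached_menu_html() {
        $html = get_transient( 'bb_top_menu_html' );   // hypothetical cache key
        if ( false === $html ) {
            $html = build_full_menu_html();            // the expensive queries
            set_transient( 'bb_top_menu_html', $html, HOUR_IN_SECONDS );
        }
        return $html;
    }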

    Add new UX features

The new version of Breeding Business brings its own set of new UX features: more AJAX calls and fewer page refreshes, more white space, and an easier scroll through the entire page. We've also decluttered the article footer so our calls to action jump out at visitors.

    Conclusion is… One man can only do so much!

Everything written here is what I do daily: article writing, support emails, plugin updates, website updates, email outreach, designing illustrations, social media promotion, bookkeeping and accounting, strategizing and long-term planning, etc. And I'm not helping myself by adding a new recurring item to our upcoming version: biweekly giveaways!

Over the last few weeks, I realized how stupid it is to rely only on yourself. It's self-destructive and counterproductive. Still, I genuinely believe that delegating any of these tasks will result in a loss of quality and will cost me money.

    Yet, I have to leave my ego at the door and put some faith in other people. Sure, I may work with some disappointing people at first but it is also my duty to teach them how I want them to work.

    This is my focus for 2017 — learn how to surround myself with the right people (or person) to free some time for me to focus on what I do best.

     

    About the author

    Lazhar is the founder of Breeding Business, a free online magazine educating responsible dog breeders all around the world through in-depth dog breeding articles, interviews, ebooks and comprehensive guides.

  • What is AMP (Accelerated Mobile Pages)? How to setup CleanTalk for AMP

    What is AMP?

Accelerated Mobile Pages is a tool for creating static-content web pages that load almost instantly on mobile devices. It consists of three parts:

1. AMP HTML — HTML with limitations for reliable performance and some extensions for building rich content.
2. AMP JS — a library that ensures fast rendering of AMP pages. Third-party JavaScript is forbidden.
3. Google AMP Cache — a proxy-based content delivery network for delivering all valid AMP documents. It fetches AMP HTML pages, caches them, and improves page performance automatically.

    Advantages

• A lightweight version of standard web pages that loads fast.
• Instant loading of multimedia content: videos, animations, graphics.
• Identical encoding: the same fast-rendered website content on different devices.
• The AMP project is open source, which enables free information sharing and idea contribution.
• A possible SEO advantage, since page load speed is one of the ranking factors.
• There are plugins for popular CMSs that make it easier to use AMP on your website.

    How to use it in WordPress

When you choose which AMP plugin to use, keep the following in mind:

    — Integration with SEO plugin for attaching corresponding metadata.

    — Analytics gathering with traffic tracking of your AMP page.

    — Displaying ads if you are a publisher.

    Available plugins in the WordPress catalog:

    1. AMP by Automattic
    2. Facebook Instant Articles & Google AMP Pages by PageFrog
    3. AMP – Accelerated Mobile Pages
    4. AMP Supremacy
    5. Custom AMP (requires installed AMP by Automattic)

As an example, let's install and activate AMP by Automattic and create a new post with multimedia content. Please note: a post, not a page. Pages and archives are not currently supported.

The AMP by Automattic plugin converts your post into its accelerated version automatically, so you don't have to duplicate anything yourself. Just add /amp/ (or ?amp=1) to the end of the post's link, and that is enough.

    How to setup CleanTalk for AMP

Please make sure that the option “Use AJAX for JavaScript check” is disabled, as it would prevent regular JavaScript execution.

    The option is here:

WordPress Admin Page —> Settings —> CleanTalk, and uncheck SpamFireWall.

Then click Advanced settings —> disable “Use AJAX for JavaScript check” —> Save Changes.

Other options will not interfere with the functioning of AMP posts. The CleanTalk Anti-Spam plugin will protect all data submission fields that remain after the conversion.

For now, most AMP plugins remove the ability to comment and to send contact form data on accelerated pages.

    Google validation

Now you need to validate your website's structured data using the Google validation tool:

    https://search.google.com/structured-data/testing-tool/

If you don't do this, a search bot will simply not pay attention to your post and no one will see it in the search results.

Copy and paste the link to your AMP post and check the result. Fix any problems it points out.

After that, the AMP version of your post will be ready to use.

    Links

    AMP project:
    https://www.ampproject.org/

    AMP blog:
    https://amphtml.wordpress.com/

    AMP plugins in the WordPress catalog:
    https://wordpress.org/plugins/search.php?q=AMP

    Google Search recommendations of how to create accelerated mobile pages:
    https://support.google.com/webmasters/answer/6340290?hl=en

  • How to protect a Linux system: 10 tips

    How to protect a Linux system: 10 tips

At the annual LinuxCon conference in 2015, the creator of the GNU/Linux kernel, Linus Torvalds, shared his opinion on the security of the system. He stressed the need to mitigate the effect of individual bugs with competent, layered protection, so that when one component is compromised, the next layer covers for the problem.

In this article we will try to cover this subject from a practical point of view:

• we start with presets and recommendations for choosing and installing a Linux distribution;
• then we talk about a simple and effective protection measure: security updates;
• next, we consider how to set restrictions for programs and users;
• how to secure connections to the server via SSH;
• we give some examples of configuring a firewall and limiting unwanted traffic;
• in the concluding part, we explain how to disable unnecessary programs and services and how to further protect servers from intruders.
1. Configure the preboot environment before installing Linux

You need to take care of system security even before installing Linux. Here is a set of recommendations for computer settings that should be considered and applied before installing the operating system:

• Boot in UEFI mode (not legacy BIOS)
• Set a password on the UEFI setup
• Activate SecureBoot mode
• Set a UEFI-level password for booting the system
2. Select the appropriate Linux distribution

Most likely, you will choose a popular distribution: Fedora, Ubuntu, Arch, Debian, or one of their derivatives. In any case, you should look for these features:

• Support for mandatory (MAC) and role-based (RBAC) access control: SELinux/AppArmor/GrSecurity
• Publication of security bulletins
• Regular release of security updates
• Cryptographic verification of packages
• Support for UEFI and SecureBoot
• Support for full native disk encryption

    Recommendations for installing distributions

All distributions are different, but there are points worth paying attention to and carrying out:

• Use full disk encryption (LUKS) with a reliable passphrase
• Encrypt the swap partition as well
• Set a password for editing the boot loader
• Use a strong root password
• Work from an unprivileged account that belongs to the administrators group
• Set a strong user password different from the root password
3. Set up automatic security updates

One of the main ways to keep an operating system secure is to update the software: updates often fix discovered bugs and critical vulnerabilities.

In the case of server systems there is a risk of failure during an upgrade, but in our opinion problems can be minimized if you automatically install only security updates.

Auto-updating works only for packages installed from repositories, not for independently compiled ones:

• In Debian/Ubuntu, use the unattended-upgrades package
• In CentOS, use yum-cron for auto-updates
• In Fedora, dnf-automatic serves this purpose
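On Debian/Ubuntu, for example, enabling unattended-upgrades comes down to a small APT configuration (a sketch; exact file contents can vary between releases):

    # /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";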

To upgrade manually, use the package manager commands available in your distribution:

    yum update

    or

    apt-get update && apt-get upgrade

Linux can be configured to send email notifications about new updates.

Also, to maintain the security of the Linux kernel there are protective extensions such as SELinux. This extension helps keep the system safe from incorrectly configured or dangerous programs.

SELinux is a flexible mandatory access control system that can work alongside the usual discretionary access control. Running programs are allowed to access files, sockets, and other processes, and SELinux sets limits so that harmful applications cannot break the system.

4. Limit access to external services

The next protection method after updating is limiting access to external services. For this, edit the files /etc/hosts.allow and /etc/hosts.deny.

    Here is an example of how to restrict access to telnet and ftp:

    In file /etc/hosts.allow:

in.telnetd: 123.12.41., 126.27.18., .mydomain.name, .another.name
in.ftpd: 123.12.41., 126.27.18., .mydomain.name, .another.name

The example above will allow telnet and ftp connections from any host in the IP classes 123.12.41.* and 126.27.18.*, as well as from hosts in the domains mydomain.name and another.name.

Next, in the file /etc/hosts.deny:

in.telnetd: ALL
in.ftpd: ALL

    Adding a user with limited rights

We do not recommend connecting to the server as the root user: root has the right to run any command, even ones critical to the system. It is therefore better to create a user with restricted rights and work through it. Administration can be performed through sudo (substitute user and do), a temporary elevation to administrator level.

    How to create a new user:

    In Debian and Ubuntu:

Create a user, replacing administrator with the desired name, and specify the password in response to the prompt. Password characters are not displayed on the command line:

    adduser administrator

    Add the user to the sudo group:

    adduser administrator sudo

    Now you can use the prefix sudo when executing commands that require administrator rights, for example:

    sudo apt-get install htop

    In CentOS and Fedora:

    Create a user, replacing administrator with your desired name, and create a password for his account:

useradd administrator && passwd administrator

Add the user to the wheel group to grant sudo rights:

usermod -aG wheel administrator

Use only strong passwords: a minimum of 8 characters of mixed case, with digits and other special characters. To find weak passwords among the users of your server, use utilities such as “John the Ripper”, and adjust the settings of pam_cracklib.so to enforce strong passwords.
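On Debian-like systems, for instance, a pam_cracklib line in /etc/pam.d/common-password might look like this (a sketch; the parameter values are illustrative):

    # at least 10 characters, 3 of them different from the old password,
    # with at least one upper-case letter, one digit and one special character
    password requisite pam_cracklib.so retry=3 minlen=10 difok=3 ucredit=-1 dcredit=-1 ocredit=-1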

Set the password expiration period with the chage command:

    chage -M 60 -m 7 -W 7 UserName

    Disable password aging with the command:

    chage -M 99999 UserName

    Find out when a user’s password will expire:

    chage -l UserName

    Also, you can edit the fields in the file /etc/shadow:

    {UserName}:{password}:{lastpasswdchanged}:{Minimum_days}:{Maximum_days}:{Warn}:{Inactive}:{Expire}:

where

• Minimum_days: the minimum number of days between password changes.
• Maximum_days: the maximum number of days the password remains valid.
• Warn: the number of days before expiration on which the user is warned about the upcoming change.
• Expire: the exact date on which the login expires.

It is also worth limiting the reuse of old passwords via the pam_unix.so module and setting a limit on the number of failed login attempts.
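A sketch of both restrictions (file paths and module stacking differ between distributions):

    # /etc/pam.d/common-password – refuse the last 5 passwords (history in /etc/security/opasswd)
    password required pam_unix.so use_authtok sha512 remember=5

    # /etc/pam.d/common-auth – lock the account for 10 minutes after 5 failed attempts
    auth required pam_tally2.so deny=5 unlock_time=600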

    To see the number of failed login attempts:

    faillog

To unblock an account after failed logins:

    faillog -r -u UserName

    To lock and unlock accounts, you can use the command passwd:

# lock an account
passwd -l UserName

# unlock an account
passwd -u UserName

To make sure that all users have passwords set, use the command:

    awk -F: '($2 == "") {print}' /etc/shadow

To lock users without passwords:

    passwd -l UserName

Make sure that UID 0 is set only for the root account. Enter this command to see all users with a UID equal to 0:

    awk -F: '($3 == "0") {print}' /etc/passwd

    You should see only:

    root:x:0:0:root:/root:/bin/bash

If there are other lines, check whether you really meant to give them UID 0, and delete the unnecessary lines.

5. Set access rights for users

After setting up passwords, it is worth making sure that all users have access appropriate to their role and responsibility. In Linux you can set access permissions on files and directories, which makes it possible to create and control different levels of access for different users.

    Access categories

Linux is built around multi-user operation, so each file belongs to one specific user. Even if the server is administered by one person, multiple accounts are created for the various programs.

To view the users in the system, use the command:

    cat /etc/passwd

The file /etc/passwd contains a line for each user of the operating system. Separate users may be created for services and applications, and they will also be present in this file.

In addition to individual accounts, there is an access category for groups. Each file belongs to one group, and one user can belong to several groups.

To view the groups your account belongs to, use the command:

    groups

To display a list of all groups in the system, where the first field indicates the group name:

    cat /etc/group

There is also the “other” access category, for users who do not own the file and do not belong to its group.

    Types of access

For each category of users you can set access types. Usually these are the rights to execute, read, and modify a file. In Linux, access types are written in two notations: alphabetic and octal.

    In alphabetic notation, permissions are indicated by letters:

r = read

w = write

x = execute

In octal notation, the level of access to a file is determined by numbers from 0 to 7, where 0 means no access and 7 means full access to modify, read, and execute:

4 = read

2 = write

1 = execute
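The digits are summed per category in the order owner, group, others; as a sketch (the file names are placeholders):

    # owner: read+write (4+2=6), group: read (4), others: none (0)
    chmod 640 report.txt

    # owner: full access (7), group and others: read+execute (4+1=5), typical for scripts
    chmod 755 deploy.sh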

6. Use keys to connect via SSH

Password authentication is usually used to connect to a host via SSH. We recommend a more secure way: a pair of cryptographic keys. In this case the private key is used instead of a password, which makes brute-force guessing much harder.

For example, let's create a key pair. These actions should be performed on the local computer, not on the remote server. During key generation you can specify a passphrase for accessing the keys; it can be stored in your computer's keychain manager so you do not have to type it every time.

If you have already created RSA keys before, skip the generation command. To check for existing keys, first run:

    ls ~/.ssh/id_rsa*

    To generate new keys:

ssh-keygen -b 4096

Uploading the public key to the server

Replace administrator with the name of the key owner and 1.1.1.1 with the IP address of your server. From the local computer, type:

    ssh-copy-id administrator@1.1.1.1

To test the connection, disconnect and reconnect to the server: login should now occur with the created keys.

    Setting up SSH

You can disable SSH connections as the root user and use sudo at the beginning of commands to obtain administrator rights instead. On the server, in the file /etc/ssh/sshd_config, find the PermitRootLogin parameter and set its value to no.

You can also forbid SSH login by password so that all users have to use keys. In the file /etc/ssh/sshd_config, set the PasswordAuthentication parameter to no. If this line doesn't exist or is commented out, add or uncomment it accordingly.

    In Debian or Ubuntu you can enter:

    nano /etc/ssh/sshd_config
    
...
PasswordAuthentication no

The connection can additionally be secured with two-factor authentication.

7. Install a firewall

A new vulnerability was recently discovered that allows DDoS attacks on servers running Linux. The bug appeared in the kernel with version 3.6 at the end of 2012. The vulnerability lets hackers inject content into downloaded files and web pages and break Tor connections, and exploiting it does not take much effort: the IP spoofing method does the work.

For an encrypted HTTPS or SSH connection the maximum damage is termination of the connection, but into unencrypted traffic an attacker can inject new content, including malware. A firewall is a suitable protection against such attacks.

Blocking access with a firewall

A firewall is one of the most important tools for blocking unwanted incoming traffic. We recommend allowing only the traffic you really need and denying all the rest.

For packet filtering, most Linux distributions provide the iptables controller. It is usually used by advanced users; to simplify configuration, you can use the UFW utility on Debian/Ubuntu or FirewallD on Fedora.
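As a sketch of the “deny everything, allow only what is needed” policy with UFW (run as root; the allowed ports are examples):

    ufw default deny incoming    # drop all unsolicited incoming traffic
    ufw default allow outgoing   # let the server initiate connections
    ufw allow 22/tcp             # SSH; tighten to your admin IPs if possible
    ufw allow 80,443/tcp         # web server ports
    ufw enable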

8. Disable unnecessary services

Experts from the University of Virginia recommend disabling all services that you do not use. Some background processes are set to start at boot and run until the system shuts down. To configure these programs, check the initialization scripts. Services can be started via inetd or xinetd.

If your system is configured with inetd, you can edit the list of background “daemon” programs in the file /etc/inetd.conf; to disable the startup of a service, it is enough to put a “#” sign at the beginning of its line, turning it from an executable entry into a comment.

If the system uses xinetd, its configuration is in the directory /etc/xinetd.d. Every file in the directory defines a service; a service can be disabled by specifying disable = yes, as in this example:

service finger
{
    socket_type = stream
    wait        = no
    user        = nobody
    server      = /usr/sbin/in.fingerd
    disable     = yes
}

It is also worth checking for running processes that are not managed by inetd or xinetd. You can adjust the startup scripts in the directories /etc/init.d or /etc/inittab. After making the changes, run this command under the root account:

    /etc/rc.d/init.d/inet restart

9. Protect the server physically

It is impossible to fully defend against malicious attacks by someone with physical access to the server. It is therefore necessary to protect the premises where your system is located. Data centers take safety seriously: they restrict access to servers, install security cameras, and assign permanent guards.

To enter a data center, all visitors must pass certain stages of authentication. It is also strongly recommended to use motion sensors in all areas of the center.

10. Protect the server from unauthorized access

An intrusion detection system, or IDS, collects data about the system configuration and files, and then compares this data with new changes to determine whether they are harmful to the system.

For example, the Tripwire and Aide tools build a database of system files and protect it with a set of keys. Psad is used to track suspicious activity using firewall reports.

Bro is designed for network monitoring: tracking suspicious patterns of activity, collecting statistics, executing system commands, and generating alerts. RKHunter can be used to protect against viruses and, above all, rootkits. This utility checks your system against a database of known vulnerabilities and can identify unsafe settings in applications.

    Conclusion

The tools and settings above will help you protect the system partially, but safety ultimately depends on your behavior and understanding of the situation. Without care, caution, and constant self-education, all the safety measures might not work.

    This text is a translation of the article “Как обезопасить Linux-систему: 10 советов”  published by @1cloud on habrahabr.ru.

    About the CleanTalk service

    CleanTalk is a cloud service to protect websites from spam bots. CleanTalk uses protection methods that are invisible to the visitors of the website. This allows you to abandon the methods of protection that require the user to prove that he is a human (captcha, question-answer etc.).

  • How to reduce a possibility of brute force attacks on WordPress

    How to reduce a possibility of brute force attacks on WordPress

Until CleanTalk launched its security plugin, I didn't pay much attention to the security of the WordPress admin account and relied only on the complexity of the password.

The most dangerous case is when bots use brute force to guess the password of the site's administrator account. This can lead to very serious problems, as the attacker gets full access to the administrator account: malicious code can be added to your website, and the site can be added to a botnet to participate in other attacks or in the spreading of viruses. The consequences for your reputation can be very sad.

When the security plugin was launched, I began to receive reports on its work with statistics of failed login attempts to the WordPress admin account. Every day there were from 4 to 25 such attempts, from different IP addresses. These were bots trying to guess the password.

    What I noticed:

1. The bots knew my login and were guessing its password.
2. I do not use the default Admin username; I had changed it.
3. There are other admin accounts in the blog, but over several days of observation there were no attempts to break into them.

How did the bots find out my account, and why did they not try to hack the other administrator accounts? Quite simply: I publish posts and write comments under my account, while the other accounts were made for employees, the host, and other people who work only in the website dashboard.

Based on this, I realized that the bots find out the login by parsing pages. Many people publish posts and comments from the admin account.

For example, when you publish a blog post, the link to the author looks like this: http://example.com/author/admin***/. Bots crawl your website's code on all pages looking for links of this type and collect the links for all accounts.

The same thing happens if you write a comment from the admin account; only the link is of a slightly different kind: http://example.com/members/admin***/

Even if you published a post or comment from the admin account only once, the bots will find it and will try to crack it.

I have described one possible scenario of how a list of accounts for hacking is obtained; there may be others. But experience has shown that if a WordPress administrator account is not used for publications and comments on the website, bots do not know about it.
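A related hardening trick, not part of the method described above but worth mentioning: bots also enumerate logins through requests like /?author=1, which WordPress redirects to the author page. A small Apache .htaccess sketch (assuming mod_rewrite is available) that rejects such probes:

    # return 403 Forbidden for author-enumeration requests like /?author=1
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{QUERY_STRING} (^|&)author=\d
    RewriteRule ^ - [F,L]
    </IfModule>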

What to do to minimize the chance of the website administrator account being hacked:

1. Do not publish posts and comments from the administrator account.
2. For each administrator, create an account with a different role, such as Author or Editor; it all depends on your needs.
3. Change the current administrator user. Attention! Before doing this, you need to back up your website and databases. I cannot recommend this step; if you do it, you do so at your own risk, as it may lead to undesirable consequences.

You will need to create a new user with administrator rights and a user with another role, such as Author. Log in to the dashboard with the new account and test the Administrator's ability to manage the site, settings, and users.

Go to “Users” and delete the previous admin account. WordPress will ask to whom to reassign its articles and comments; this is where the pre-created Author user comes in handy. Reassign the articles to it and use it in the future to publish posts and comments.

The same can be done for the other administrator accounts. But most WordPress users would rather install one of the plugins for protection against brute-force attacks, such as the Security & Firewall plugin from CleanTalk.

  • A brief history of passwords from the P to the S: birth, death and the zombie apocalypse

    A brief history of passwords from the P to the S: birth, death and the zombie apocalypse

The attack on the World Trade Center towers on September 11, 2001 claimed the lives of 658 employees of the financial company Cantor Fitzgerald. Its director Howard Lutnick, who lost his brother that day, faced an unprecedented problem. It wasn't even that the company's servers, including the backups, were buried under the rubble. The information was partly accessible, but it was locked behind hundreds of accounts of perished colleagues. Experts from Microsoft were brought in to help; they used powerful servers for fast brute-forcing. The company's existence depended on access to the data, and it had to be recovered in time for the first opening of trading after the attacks. The victims' personal data could speed up the cracking, so Lutnick had to call relatives and, at the most inopportune moment, ask them a series of questions: the wedding day, the name of the college or university, the dog's name.

    (more…)

• API Method for Getting a Country Code by IP Address

    We are pleased to announce the launch of a new API method.

Now you can get a country code identifying the country of an IP address with a single API call.

The API method returns a 2-letter country code (US, UK, CN, etc.) or the full country name (Germany, Canada) for an IP address. The limit on the number of records is 1000.

Instructions on how to use this method can be found here:
    https://cleantalk.org/help/api-ip-info-country-code

  • CleanTalk launches a project to ensure the safety of websites

CleanTalk is launching a major project to create a cloud service for website security. The project will include several functions: protection of the site against brute-force attacks, a vulnerability scanner, and virus removal.

Each function will have a number of features that help you easily keep the website safe from hackers.

    (more…)