Category: CleanTalk

  • New features for spam comments management on WordPress


For WordPress users of the service, we have added new options for managing spam comments.
By default, all spam comments are placed in the spam folder; now you can change how the plugin deals with spam comments:

1. Move to Spam Folder. You can prevent the spam folder from overflowing: it can be cleaned automatically using the option “Keep spam comments for 15 days.” Enable this option in the plugin settings: WP Dashboard-Settings-Anti-Spam by CleanTalk->

2. Move to Trash. All spam comments will be placed in the “Trash” folder in the WordPress Comments section, except comments with Stop-Words. Stop-Word comments will always be stored in the “Pending” folder.

3. Ban comments without moving to the WordPress backend. All spam comments will be deleted permanently, without reaching the WordPress backend, except comments with Stop-Words. Stop-Word comments will always be stored in the “Pending” folder. You can see which comments were blocked and banned in the Anti-Spam Log.

To manage the actions taken on spam comments, go to the Control Panel, select the website you want to change the actions for, and go to “Settings” under the name of the website. On the website settings page, select the necessary option from the “SPAM comment action:” list and click the “Save” button at the bottom of the page.

  • Exotic HTTP headers

Hello! This article illustrates the result of applying some important and exotic HTTP headers, most of which are related to security.

    X-XSS-Protection

An XSS (cross-site scripting) attack is an attack in which malicious code is embedded into the target page.
For example, like this:

    <h1>Hello, <script>alert('hacked')</script></h1>

This type of attack is easy to detect, and the browser can handle it: if the source code contains part of the request, it may be a threat.

The X-XSS-Protection header manages this browser behavior.

    Accepted values:

• 0 — the filter is turned off
• 1 — the filter is enabled. If an attack is detected, the browser will remove the malicious code.
• 1; mode=block — the filter is enabled, but if an attack is detected, the browser will not load the page at all.
• 1; report=http://domain/url — the filter is enabled, and the browser will clean the page of malicious code while reporting the attempted attack. This uses a Chromium mechanism for reporting Content Security Policy (CSP) violations to the specified address.

Let's create a web server sandbox in node.js to see how it works.

    
var express = require('express')
var app = express()

app.use((req, res) => {
  // set the header from the query string and echo the user parameter back
  if (req.query.xss) res.setHeader('X-XSS-Protection', req.query.xss)
  res.send(`<h1>Hello, ${req.query.user || 'anonymous'}</h1>`)
})

app.listen(1234)
    
    

    I will use Google Chrome 55.

No header
http://localhost:1234/?user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E

Nothing happens; the browser successfully blocks the attack. Chrome blocks the threat by default and reports it to the console.

    It even highlights the problem area in the source code.

    X-XSS-Protection: 0

    http://localhost:1234/?user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E&xss=0

    Oh no!

    X-XSS-Protection: 1

    http://localhost:1234/?user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E&xss=1

The page was cleaned because of the explicit header.

    X-XSS-Protection: 1; mode=block

    http://localhost:1234/?user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E&xss=1;%20mode=block

In this case, the attack is prevented by blocking the page load.

    X-XSS-Protection: 1; report=http://localhost:1234/report

    http://localhost:1234/?user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E&xss=1;%20report=http://localhost:1234/report

The attack is prevented, and a report is sent to the specified address.
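
Out of curiosity, we can also look at what the browser sends to that address. Here is a minimal sketch of a report receiver (my own addition, not from the original article; in practice you would merge this route into the sandbox above, since both use port 1234). Chrome delivers the violation report as a JSON POST body:

var express = require('express')
var bodyParser = require('body-parser')
var app = express()

// Chrome posts the violation report as JSON; accept any content type it may use
app.use(bodyParser.json({ type: () => true }))

app.post('/report', (req, res) => {
  console.log('violation report:', JSON.stringify(req.body))
  res.status(204).end() // nothing to render, just acknowledge receipt
})

app.listen(1234)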

    X-Frame-Options

With this header you can protect yourself from so-called clickjacking.

Imagine that an attacker has a YouTube channel and wants more followers.

He can create a page with a button “Do not press”, which means everyone will inevitably click it. But over the button sits a completely transparent iframe, and this frame hides the channel page with the subscription button. Therefore, when you press the button, you actually subscribe to the channel — provided, of course, that you were logged into YouTube.

Let's demonstrate that.

First, you need to install a browser extension that ignores this header.

    Create a simple page.

    
<style>
button { background: red; color: white; padding: 10px 20px; border: none; cursor: pointer; }
iframe { opacity: 0.8; z-index: 1; position: absolute; top: -570px; left: -80px; width: 500px; height: 650px; }
</style>

<button>Do not click this button!</button>
<iframe src="https://youtu.be/dQw4w9WgXcQ?t=3m33s"></iframe>
    

As you can see, I placed the frame containing the subscription button right over our button (z-index: 1), so if you try to click the button, you actually click the frame. In this example the frame is not fully transparent, but that can be fixed with opacity: 0.

In practice this doesn't work, because YouTube sets the required header, but I hope the sense of the threat is clear.

To prevent the page from being used in a frame, use the X-Frame-Options header.

    Accepted values:

• deny — do not load the page in a frame at all.
• sameorigin — do not load in a frame if the origin does not match.
• allow-from: DOMAIN — you can specify the domain from which the page may be loaded in a frame.

We need a web server to demonstrate:

var express = require('express')

for (let port of [1234, 4321]) {
  var app = express()
  // /iframe embeds the site from port 1234, passing through the h parameter
  app.use('/iframe', (req, res) => res.send(`<h1>iframe</h1><iframe src="//localhost:1234?h=${req.query.h || ''}"></iframe>`))
  app.use((req, res) => {
    if (req.query.h) res.setHeader('X-Frame-Options', req.query.h)
    res.send('<h1>Website</h1>')
  })
  app.listen(port)
}
    

No header

Anyone can embed our website at localhost:1234 in a frame.

    X-Frame-Options: deny

The page cannot be embedded in a frame at all.

    X-Frame-Options: sameorigin

Only pages from the same origin can be embedded in the frame. Origins are the same if the domain, port and protocol match.

    X-Frame-Options: allow-from localhost:4321

It seems that Chrome ignores this option in favor of the Content-Security-Policy header (discussed below). It does not work in Microsoft Edge either.

Below, Mozilla Firefox.

    X-Content-Type-Options

This header prevents MIME-type spoofing attacks (<script src="script.txt">) and unauthorized hotlinking (<script src="https://raw.githubusercontent.com/user/repo/branch/file.js">).

    
var express = require('express')
var app = express()

app.use('/script.txt', (req, res) => {
  if (req.query.h) res.header('X-Content-Type-Options', req.query.h)
  // deliberately serve the script with a text/plain content type
  res.header('content-type', 'text/plain')
  res.send('alert("hacked")')
})

app.use((req, res) => {
  res.send(`<h1>Website</h1><script src="/script.txt?h=${req.query.h || ''}"></script>`)
})

app.listen(1234)
    

No header

    http://localhost:1234/

Although script.txt is a text file with the type text/plain, it will be run as a script.

    X-Content-Type-Options: nosniff

    http://localhost:1234/?h=nosniff

This time the types do not match, and the file is not executed.

    Content-Security-Policy

This is a relatively new header; it helps reduce the risk of XSS attacks in modern browsers by specifying which resources may be loaded on the page.

For example, you can ask the browser not to execute inline scripts and to download files from only one domain. Inline scripts can look not only like <script>…</script>, but also like <h1 onclick="…">.

    Let’s see how it works.

    
var express = require('express')

for (let port of [1234, 4321]) {
  var app = express()

  app.use('/script.js', (req, res) => {
    res.send(`document.querySelector('#${req.query.id}').innerHTML = 'changed ${req.query.id}-script'`)
  })

  app.use((req, res) => {
    var csp = req.query.csp
    if (csp) res.header('Content-Security-Policy', csp)
    res.send(`
      <html>
      <body>
        <h1>Hello, ${req.query.user || 'anonymous'}</h1>
        <p id="inline">will the inline script change this?</p>
        <p id="origin">will the same-origin script change this?</p>
        <p id="remote">will the remote script change this?</p>
        <script>document.querySelector('#inline').innerHTML = 'changed inline-script'</script>
        <script src="/script.js?id=origin"></script>
        <script src="//localhost:1234/script.js?id=remote"></script>
      </body>
      </html>
    `)
  })

  app.listen(port)
}
    

No header

It works as you would expect.

Content-Security-Policy: default-src 'none'

    http://localhost:4321/?csp=default-src%20%27none%27&user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E

default-src applies the rule to all resource types (images, scripts, frames, etc.); the value 'none' forbids everything. Below you can see what happens and the errors displayed in the browser.

Chrome refused to run any scripts. In this case you can't even load a favicon.ico.

Content-Security-Policy: default-src 'self'

    http://localhost:4321/?csp=default-src%20%27self%27&user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E

Now resources from the same origin can be used, but external and inline scripts still cannot run.

Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline'

    http://localhost:4321/?csp=default-src%20%27self%27;%20script-src%20%27self%27%20%27unsafe-inline%27&user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E

This time we also allowed the execution of inline scripts. Note that the XSS attack in the request was blocked too. But this will not happen if you deliver both unsafe-inline and X-XSS-Protection: 0 at the same time.

    Other values

Many examples are nicely shown on the website content-security-policy.com.

• default-src 'self' — allow resources only from the same origin
• script-src 'self' www.google-analytics.com ajax.googleapis.com — allow Google Analytics, the Google AJAX CDN, and resources from the same origin.
• default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self'; — allow images, scripts, AJAX and CSS from the same origin and forbid loading any other resources. For most sites this is a good initial setting.
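
As a sketch, that last recommended policy could be wired into the same Express pattern used throughout this article (the policy string is taken verbatim from the list above; everything else is illustrative):

var express = require('express')
var app = express()

// a good starting policy: same-origin scripts, AJAX, images and CSS only
var policy = "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';"

app.use((req, res, next) => {
  res.header('Content-Security-Policy', policy)
  next()
})

app.use((req, res) => res.send('<h1>Website</h1>'))
app.listen(1234)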

    I didn’t check, but I think that the following headers are equivalent:

• frame-ancestors 'none' and X-Frame-Options: deny
• frame-ancestors 'self' and X-Frame-Options: sameorigin
• frame-ancestors localhost:4321 and X-Frame-Options: allow-from localhost:4321
• script-src 'self' without 'unsafe-inline' and X-XSS-Protection: 1

If you look at the headers of facebook.com or twitter.com, you will notice that these sites use CSP a lot.

    Strict-Transport-Security

HTTP Strict Transport Security (HSTS) is a security policy mechanism that helps protect a website from connection attempts over an insecure channel.

Let's say we want to connect to facebook.com. If you don't type https:// before the address, the protocol defaults to HTTP, so the request will look like http://facebook.com.

    
    $ curl -I facebook.com
    HTTP/1.1 301 Moved Permanently
    Location: https://facebook.com/
    

    After that, we will be redirected to the secure version of Facebook.

If you connect through a public Wi-Fi hotspot owned by an attacker, the request may be intercepted, and instead of facebook.com the attacker may serve a look-alike page to capture the username and password.

To guard against such an attack, you can use the aforementioned header, which tells the client to use the HTTPS version of the site next time.

    
    $ curl -I https://www.facebook.com/
    HTTP/1.1 200 OK
    Strict-Transport-Security: max-age=15552000; preload
    

If the user logged into Facebook at home and then tried to open it from an unsafe access point, he is not in danger, because browsers remember the header.

But what happens if you connect to the insecure network for the first time? In that case the protection will not work.

But browsers have a trump card here: they have a predefined list of domains for which only HTTPS should be used.

You can submit your domain at this address. There you can also find out whether the header is used correctly.

    Accepted values:

• max-age=15552000 — the time, in seconds, that the browser should remember the header.
• includeSubDomains — if you specify this optional value, the header applies to all subdomains.
• preload — for site owners who want the domain to be included in the predefined list maintained by Chrome (and used by Firefox and Safari).

And what if you need to switch back to HTTP before max-age expires, or if you have set preload? You can set max-age=0, and the rule redirecting to the https version will stop working.
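
Here is a minimal Express sketch of both cases, in the style of the earlier examples (the max-age values are for illustration only, and remember that browsers honor this header only when it arrives over HTTPS):

var express = require('express')
var app = express()

app.use((req, res) => {
  if (req.query.off) {
    // max-age=0 asks the browser to forget the rule before it expires
    res.setHeader('Strict-Transport-Security', 'max-age=0')
  } else {
    // remember for 180 days and apply to subdomains as well
    res.setHeader('Strict-Transport-Security', 'max-age=15552000; includeSubDomains')
  }
  res.send('<h1>Website</h1>')
})

app.listen(1234)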

    Public-Key-Pins

HTTP Public Key Pinning (HPKP) is a security policy mechanism that allows HTTPS sites to protect themselves against malicious use of forged or fraudulent certificates.

    Accepted values:

• pin-sha256="<sha256>" — a Base64-encoded fingerprint of the Subject Public Key Information (SPKI), in quotes. You can specify multiple pins for different public keys. In the future, some browsers may allow other hashing algorithms besides SHA-256.
• max-age=<seconds> — the time, in seconds, during which only the listed keys may be used to access the site.
• includeSubDomains — if you specify this optional parameter, the header applies to all subdomains.
• report-uri="<URL>" — if a URL is specified, key validation errors will be reported to this address.

Instead of the Public-Key-Pins header you can use Public-Key-Pins-Report-Only; in that case only key mismatch reports will be sent, and the browser will still load the page.

    So does Facebook:

    
$ curl -I https://www.facebook.com/
HTTP/1.1 200 OK
...
Public-Key-Pins-Report-Only:
    max-age=500;
    pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18=";
    pin-sha256="r/mIkG3eEpVdm+u/ko/cwxzOMo1bk4TyHIlByibiA5E=";
    pin-sha256="q4PO2G2cbkZhZ82+JgmRUyGMoAeozA+BSXVXQWB8XWQ=";
    report-uri="http://reports.fb.com/hpkp/"
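
Sending such a header from Express is straightforward; a sketch (the pin values below are placeholders, not real fingerprints — in reality you would use the Base64 SHA-256 of your actual SPKI):

var express = require('express')
var app = express()

app.use((req, res) => {
  // placeholder pins; report-only mode will not block anything
  res.setHeader('Public-Key-Pins-Report-Only',
    'max-age=500; pin-sha256="PRIMARY_KEY_HASH="; pin-sha256="BACKUP_KEY_HASH="; report-uri="https://example.com/hpkp"')
  res.send('<h1>Website</h1>')
})

app.listen(1234)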
    

Why is this necessary? Aren't trusted certificate authorities (CAs) enough?

An attacker can create a certificate for facebook.com and trick the user into adding it to the list of trusted certificates, or it could be done by an administrator.

Let's try to create a certificate for facebook.com.

    
sudo mkdir /etc/certs
echo -e 'US\nCA\nSF\nFB\nXX\nwww.facebook.com\nn*@sp**.org' | \
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/certs/facebook.key \
  -out /etc/certs/facebook.crt
    

    And make it trusted in the local system.

    
# curl
sudo cp /etc/certs/*.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates

# Google Chrome
sudo apt install libnss3-tools -y
certutil -A -t "C,," -n "FB" -d sql:$HOME/.pki/nssdb -i /etc/certs/facebook.crt

# Mozilla Firefox
#certutil -A -t "CP,," -n "FB" -d sql:`ls -1d $HOME/.mozilla/firefox/*.default | head -n 1` -i /etc/certs/facebook.crt
    

    Now run the web server using this certificate.

    
var fs = require('fs')
var https = require('https')
var express = require('express')

// the certificate name (facebook/google) is passed as a command-line argument
var options = {
  key: fs.readFileSync(`/etc/certs/${process.argv[2]}.key`),
  cert: fs.readFileSync(`/etc/certs/${process.argv[2]}.crt`)
}

var app = express()
app.use((req, res) => res.send(`<h1>hacked</h1>`))
https.createServer(options, app).listen(443)
    

Point the domain at our server and start it:

    
echo 127.0.0.1 www.facebook.com | sudo tee -a /etc/hosts
sudo node server.js facebook
    

    Let’s see what happened

    
$ curl https://www.facebook.com
<h1>hacked</h1>
    

    Great. curl validates the certificate.

Since I had already visited Facebook, and Google Chrome had seen its headers, it should report the attack but still allow the page, right?

Nope. The keys are not checked because of the local root certificate [Public key pinning bypassed]. This is interesting…

Well, what about www.google.com?

    
echo -e 'US\nCA\nSF\nGoogle\nXX\nwww.google.com\nn*@sp**.org' | \
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/certs/google.key \
  -out /etc/certs/google.crt

sudo cp /etc/certs/*.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
certutil -A -t "C,," -n "Google" -d sql:$HOME/.pki/nssdb -i /etc/certs/google.crt

echo 127.0.0.1 www.google.com | sudo tee -a /etc/hosts
sudo node server.js google
    

    The same result. I think this is a feature.

But in any case, if you do not add these certificates to the local store, the websites simply will not open, because the option to continue with an insecure connection in Chrome, or to add an exception in Firefox, will not be available.

    Content-Encoding: br

The data is compressed with Brotli.

The algorithm promises better compression than gzip at comparable decompression speed. Google Chrome supports it.

Of course, there is a node.js module for it.

    
var shrinkRay = require('shrink-ray')
var request = require('request')
var express = require('express')

// download a large text file from Project Gutenberg and serve it compressed
request('https://www.gutenberg.org/files/1342/1342-0.txt', (err, res, text) => {
  if (err) throw new Error(err)
  var app = express()
  app.use(shrinkRay())
  app.use((req, res) => res.header('content-type', 'text/plain').send(text))
  app.listen(1234)
})
    

    Original size: 700 KB

    Brotli: 204 KB

    Gzip: 241 KB

    Timing-Allow-Origin

Using the Resource Timing API, you can find out how long it took to process resources on a page.

Because load-time information can be used to determine whether the user has visited the page before (resources may be cached), the standard is considered vulnerable if it gives this information to arbitrary hosts.

    
<script>
setTimeout(function() {
  console.log(window.performance.getEntriesByType('resource'))
}, 1000)
</script>

<img src="http://placehold.it/350x150">
<img src="/local.gif">
    

It seems that unless you specify Timing-Allow-Origin, detailed information about the timing of operations (domain lookup, for example) is available only for same-origin resources.

    You can use this:

    • Timing-Allow-Origin: *
    • Timing-Allow-Origin: http://foo.com http://bar.com
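
For example, a static file handler that opts its resources into cross-origin timing could look like this (a sketch in the style of the earlier examples; the /static path and the public directory are invented for illustration):

var express = require('express')
var app = express()

app.use('/static', (req, res, next) => {
  // allow any origin to read detailed timing data for these resources
  res.setHeader('Timing-Allow-Origin', '*')
  next()
}, express.static('public'))

app.listen(1234)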

    Alt-Svc

Alternative Services allow resources to be located in different parts of the network, and access to them can be obtained using different protocol configurations.

Google uses this:

• alt-svc: quic=":443"; ma=2592000; v="36,35,34"

This means that the browser, if it wishes, may use QUIC (HTTP over UDP) over port 443 for the next 30 days (ma = 2592000 seconds, i.e. 720 hours, i.e. 30 days). I have no idea what the v parameter means — the version?
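
Advertising an alternative service is just one more response header. A sketch (Express itself cannot speak QUIC; the header only tells clients where else to look, and the value below simply mirrors Google's):

var express = require('express')
var app = express()

app.use((req, res) => {
  // the same origin is also reachable over QUIC on port 443 for the next 30 days
  res.setHeader('Alt-Svc', 'quic=":443"; ma=2592000')
  res.send('<h1>Website</h1>')
})

app.listen(1234)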

    P3P

    Below are some P3P headers that I have seen:

• P3P: CP="This is not a P3P policy! See support.google.com/accounts/answer/151657?hl=en for more info."
• P3P: CP="Facebook does not have a P3P policy. Learn why here: fb.me/p3p"

Some browsers require third-party cookies to support the P3P protocol for declaring privacy measures.

The organization behind P3P, the World Wide Web Consortium (W3C), halted work on the protocol a few years ago because modern browsers never fully supported it. As a result, P3P is outdated and does not cover technologies that are now used on the web, so most sites do not support it.

I didn't dig too deep, but apparently the header is needed for IE8 to accept third-party cookies.

For example, if IE privacy settings are set to high, all cookies from sites without a compact privacy policy will be blocked, but cookies from sites with headers like the ones above will not.

Which of the following HTTP headers do you use in your projects?

    X-XSS-Protection
    X-Frame-Options
    X-Content-Type-Options
    Content-Security-Policy
    Strict-Transport-Security
    Public-Key-Pins
    Content-Encoding
    Timing-Allow-Origin
    Alt-Svc
    P3P
    Other

    This text is a translation of the article “Экзотичные заголовки HTTP”  published by @A3a on habrahabr.ru.

    About the CleanTalk service

    CleanTalk is a cloud service to protect websites from spambots. CleanTalk uses protection methods that are invisible to the visitors of the website. This allows you to abandon the methods of protection that require the user to prove that he is a human (captcha, question-answer etc.).

  • New anti-spam checks for WordPress, XenForo, phpBB 3.1, SMF, Bitrix

We are pleased to announce that we have released new versions of our plugins for WordPress, XenForo, phpBB 3.1, SMF and Bitrix.

In the new version we have added several new spam checks to improve the anti-spam service.

Mouse tracking and time zone monitoring give good results against spam bots that simulate the behavior of real visitors.

These checks will be added for other CMSs soon.

Please update your anti-spam plugins to the latest version:

    WordPress
    XenForo
    phpBB 3.1
    Simple Machines Forum
    Bitrix

  • New version of the Security Service by CleanTalk


As we informed earlier, CleanTalk has launched its website security project. The service protects the administrator control panel from brute-force attacks and records user actions.

Since November 29th, Security by CleanTalk has become a cloud service, and all main data are now available in the service Dashboard. The cost of the service is $20 per year for one website.

Switching to cloud data storage allows us to show more data and lets you use the information more flexibly thanks to the different filters in your Dashboard.

In previous versions all data were stored in the website database, and a large amount of information, along with operations on it, would affect website speed; all this could result in bad search engine rankings. Cloud data storage is also safer than the website database: if an intruder got access to your website, he could delete all the data he might be traced with.

The cloud service stores data for the last 45 days, including the user action log, brute-force attack statistics and successful backend logins, so you can always find out who did what, if necessary.

  • Breeding Business: from ordinary blog to extraordinary magazine

A geek at heart, I have always been coding little projects on localhost and a few failing websites. I guess I never really took the Internet seriously.

Then I realized that the jobs I was doing in luxury hospitality were not making me happy. I just loved coming back home and writing, developing and designing. It's just what I love. So I started looking at opportunities to generate a very small income that could make a website sustainable. And I had zero money to invest.

Over the last years, WordPress and blogging have been a huge hit and a lot of people go for it. They think about monetization before having thought about their content; I took it the other way around.

    Why Blogging About Dog Breeding?

When I set my mind to starting an online blog, I looked at the usual ways of finding the perfect “keyword”, “topic”, “niche”. These include Google Keyword Planner, Google Trends and some paid tools. I ended up with three topics that people seemingly searched for and that I was happy to write posts on.

Then I picked the best topics and started writing. And this is when I realized I couldn't write about anything other than what I truly loved — responsible and ethical dog breeding. I was writing one article after another. It just felt right.

Breeding dogs is something that has run through several generations of my family, and although I haven't done it extensively myself, I am passionate about canine genetics and the mechanisms that let you produce the best bloodline of all.

    Dog breeding is a passion of mine and it would be hard for me not to write about it.

    What Is Breeding Business?

    Breeding Business was born after I wrote a few articles. I was going on Facebook Groups at the time to promote my articles (and eventually got suspended!) because Google wasn’t sending me enough traffic at first.

    The website consists of a lot of articles written and published in different categories: how-to’s, interviews of breeders, reviews of dog breeding supplies, and obviously in-depth articles on how to breed dogs.

After just a few weeks, some visitors started asking what books we recommended. Unfortunately, most books are either too narrow in their topics or too breed-specific. A dog is a dog, and the principles remain the same for a Chihuahua or a Rottweiler.

Therefore, we created our very own ebook, The Dog Breeder's Handbook. It was created in iBooks Author, since it's a free application built by Apple, and at the time I didn't know whether the ebook was going to be a hit or a miss. I like to be in motion, try things and, if they fail, move on to the next one.

The Dog Breeder's Handbook offers all the theoretical knowledge dog breeders need and a lot of actionable tips for them to put into practice. Yet the launch was slow because the traffic was low. Still, it was generating a few hundred dollars every month. This is what kept me going and made me believe in it even more.

From then on, I decided to add another product many visitors were hinting at: a WordPress plugin for dog breeders. I built it in a few weeks, and today it is a very good seller. I release updates based on the feedback loop and have a similar project to be released soon.

    Challenges When Growing a Simple Blog Into an Online Magazine

Being alone and seeing the traffic (and revenue) grow, questions start to pop into your mind.

    It’s time for some business decisions

    A blogger and solo-entrepreneur always strives for steady growth. I do not identify myself with mega-growth startups we read about everywhere. To each their own!

With Breeding Business, growth has been great, especially since Google started sending traffic our way. We followed no specific strategy; we just put out great content. Often.

Yet we're still asking ourselves a million questions…

    • Should I add another product or should I focus and grow these?
    • Communities around blogs are hype, should I make one?
    • Is the traffic growth normal or too slow?
    • Subscriptions are so popular these days, but what to offer?

These are business decisions to make. I added another product: a course. It never took off, mainly because it largely duplicated what was in the ebook. We're thinking about a new use for courses in the future, because I could see people were interested.

Communities are great, but there is nothing worse than a dead forum, so we never took that risk and are waiting until we have a bigger email list to perhaps one day launch a community. Subscriptions are great too, but just not for us right now. A lot of blogs start charging a monthly or yearly fee for members to be part of a special club, but most of them see huge churn and give the model up after a few months.

    Growth requires a technical overhaul, too

Our traffic has been growing very well thanks to search engines. This is why we needed a quality anti-spam solution, and CleanTalk has been doing a sublime job at keeping fake user accounts and comments away.

With traffic growth comes a whole new set of questions:

• Why am I not converting more visitors into opt-ins or customers?
• GTmetrix and page speed tests give me low scores; how can I optimize my website?
• Why do so many people read one article and leave?

These are technical issues that truly take time to fix. There are mainly two ways we could tackle them:

    1. Patch each little issue one by one
    2. Build a brand new website from scratch with these issues factored in

For a few months we patched issues one by one, but today I am almost finished with a brand new version of the website, to be released in two or three months after extensive testing. We're also pairing the new website with a move from cloud hosting to a VPS (increasing the monthly hosting cost tenfold…)

    Restructure the tree of information

    Our current website was up and running when we had around 20-30 articles. We have over 300 articles today. People aren’t visiting other pages because the information is badly structured and they can’t find their way around.

Categories are being completely revamped. Stuff we thought was going to attract a lot of people ended up being a graveyard, and vice versa. So we're cleaning up the way posts are categorized and tagged while updating old pages as well.

    Speed and page load

Google apparently uses your website's loading speed as a signal when deciding on your ranking. My website currently performs very poorly in terms of page load speed.

And these results come after several fixes here and there. So this is the second main focus of the update. We're also making sure the website loads much, much faster on mobile devices thanks to wp_is_mobile(), the WordPress function for detecting mobile devices: we load lower-quality images and fewer widgets.

Another WordPress optimization is the use of the Transients API for our most repeated and complicated queries, such as our top menu, footer, home queries, related posts, etc. The way it works is simple: it lets you store cached data in the database temporarily. Instead of rebuilding the full menu on every page load, a transient requires only a single database call to fetch the menu.
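
In WordPress itself this is done with the get_transient()/set_transient() functions; the underlying idea, sketched here in JavaScript to match the code used elsewhere on this blog (the cache object and the menu-building function are invented for illustration):

var cache = {} // in WordPress, the options table plays this role

// return the cached value while it is fresh, otherwise rebuild and store it
function getTransient(key, ttlMs, rebuild) {
  var hit = cache[key]
  if (hit && hit.expires > Date.now()) return hit.value
  var value = rebuild() // e.g. the expensive menu query
  cache[key] = { value: value, expires: Date.now() + ttlMs }
  return value
}

function buildTopMenu() {
  // stand-in for the expensive database query
  return ['Home', 'Articles', 'Reviews']
}

// one expensive build, then cheap reads for the next 12 hours
var menu = getTransient('top_menu', 12 * 60 * 60 * 1000, buildTopMenu)
console.log(menu)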

    Add new UX features

The new version of Breeding Business brings its own set of new UX features. More AJAX calls, fewer page refreshes. More white space and easier scrolling through the entire page. We've also decluttered the article footer so our calls to action jump out at visitors.

    Conclusion is… One man can only do so much!

Everything written here is what I do daily: article writing, support emails, plugin updates, website updates, email outreach, designing illustrations, social media promotion, bookkeeping and accounting, strategizing and long-term planning, etc. And I'm not helping myself by adding a new recurring item to our upcoming version: biweekly giveaways!

Over the last weeks, I have realized how stupid it is to rely only on yourself. It's self-destructive and counterproductive. I used to genuinely believe that delegating any of these tasks would result in a loss of quality and would cost me money.

    Yet, I have to leave my ego at the door and put some faith in other people. Sure, I may work with some disappointing people at first but it is also my duty to teach them how I want them to work.

    This is my focus for 2017 — learn how to surround myself with the right people (or person) to free some time for me to focus on what I do best.

     

    About the author

    Lazhar is the founder of Breeding Business, a free online magazine educating responsible dog breeders all around the world through in-depth dog breeding articles, interviews, ebooks and comprehensive guides.

  • What is AMP (Accelerated Mobile Pages)? How to setup CleanTalk for AMP

    What is AMP?

Accelerated Mobile Pages is a tool for creating static-content web pages that load almost instantly on mobile devices. It consists of three parts:

1. AMP HTML — HTML with restrictions for reliable performance and some extensions for building rich content.
2. AMP JS — a library that ensures fast rendering of pages. Third-party JavaScript is forbidden.
3. Google AMP Cache — a proxy-based content delivery network for delivering all valid AMP documents. It fetches AMP HTML pages, caches them and improves page performance automatically.

    Advantages

• A lightweight version of standard web pages that loads fast.
• Instant loading of multimedia content: videos, animations, graphics.
• Identical encoding — the same fast-rendered website content on different devices.
• The AMP project is open source, which enables free information sharing and contribution of ideas.
• A possible SEO advantage, as page load speed is one of the ranking factors.
• There are plugins for popular CMSs that make AMP easier to use on your website.

    How to use it in WordPress

When you choose an AMP plugin, keep in mind the following:

— Integration with your SEO plugin for attaching the corresponding metadata.

— Analytics gathering with traffic tracking of your AMP pages.

— Displaying ads, if you are a publisher.

    Available plugins in the WordPress catalog:

    1. AMP by Automattic
    2. Facebook Instant Articles & Google AMP Pages by PageFrog
    3. AMP – Accelerated Mobile Pages
    4. AMP Supremacy
    5. Custom AMP (requires installed AMP by Automattic)

As an example, let's install and activate AMP by Automattic and create a new post with multimedia content. Please note: a post, not a page. Pages and archives are not currently supported.

The AMP by Automattic plugin converts your post into an accelerated version automatically, so you don't have to duplicate it yourself. Just add /amp/ (or ?amp=1) to the end of your link, and that will be enough.

    How to setup CleanTalk for AMP

Please make sure that the option “Use AJAX for JavaScript check” is disabled, as it would prevent regular JavaScript execution.

    The option is here:

WordPress Admin Page —> Settings —> CleanTalk, and uncheck SpamFireWall.

Then click Advanced settings —> disable “Use AJAX for JavaScript check” —> Save Changes.

Other options will not interfere with the functioning of AMP posts. The CleanTalk Anti-Spam plugin will protect all data-sending fields that remain after the conversion.

For now, most AMP plugins remove the ability to comment and to send contact form data on accelerated pages.

    Google validation

Now you need to validate your website's structured data using the Google validation tool:

    https://search.google.com/structured-data/testing-tool/

If you don't do this, a search bot will simply not pay attention to your post, and no one will see it in the search results.

Copy and paste the link to your AMP post and see the result. Fix the problems it points out.

After that, the AMP version of your post will be ready to use.

    Links

    AMP project:
    https://www.ampproject.org/

    AMP blog:
    https://amphtml.wordpress.com/

    AMP plugins in the WordPress catalog:
    https://wordpress.org/plugins/search.php?q=AMP

    Google Search recommendations of how to create accelerated mobile pages:
    https://support.google.com/webmasters/answer/6340290?hl=en

  • How to protect a Linux system: 10 tips


At the annual LinuxCon conference in 2015, the creator of the GNU/Linux kernel, Linus Torvalds, shared his opinion on system security. He stressed the need to mitigate the effect of individual bugs with competent protection, so that when one component is breached, the next layer covers the problem.

In this article we will try to cover this subject from a practical point of view:

• we start with presets and recommendations for choosing and installing a Linux distribution;
• then we talk about a simple and effective element of protection — security updates;
• next, we consider how to set restrictions for programs and users;
• how to secure the connection to the server via SSH;
• we give some examples of configuring a firewall and limiting unwanted traffic;
• in the concluding part we explain how to disable unnecessary programs and services, and how to further protect servers from intruders.
1. Configure the preload environment before installing Linux

You need to take care of system security before installing Linux. Here is a set of recommendations for computer settings that should be considered and applied before installing the operating system:

• Boot in UEFI mode (not legacy BIOS)
• Set a password on the UEFI setup
• Activate SecureBoot mode
• Set a UEFI-level password to boot the system
2. Select the appropriate Linux distribution

Most likely, you will choose a popular distribution — Fedora, Ubuntu, Arch, Debian, or one of their branches. In any case, check for these mandatory features:

• Support for mandatory (MAC) and role-based (RBAC) access control: SELinux/AppArmor/GrSecurity
• Publication of security bulletins
• Regular release of security updates
• Cryptographic verification of packages
• Support for UEFI and SecureBoot
• Support for full native disk encryption

    Recommendations for installing distributions

All distributions are different, but there are points worth paying attention to and acting on:

• Use full disk encryption (LUKS) with a reliable passphrase
• Encrypt the swap partition as well
• Set a password for editing the boot loader
• Set a reliable root password
• Use an unprivileged account that belongs to the administrators group
• Set a strong user password, different from the root password
3. Set up automatic security updates

One of the main ways to keep an operating system safe is to update the software: updates often fix discovered bugs and critical vulnerabilities.

In the case of server systems there is a risk of failure during an upgrade, but in our opinion problems can be minimized if you automatically install only security updates.

Auto-updates work only for packages installed from the repositories, not for independently compiled ones:

• In Debian/Ubuntu, use the unattended-upgrades package
• In CentOS, use yum-cron for auto-updates
• In Fedora, dnf-automatic serves this purpose

To upgrade, use your distribution's package manager, e.g.:

    yum update

    or

    apt-get update && apt-get upgrade

    Linux can be configured to send notifications of new updates by email.

Also, to maintain kernel security there are protective extensions, e.g. SELinux. This extension helps keep the system safe from incorrectly configured or dangerous programs.

SELinux is a flexible mandatory access control system that can work alongside the discretionary access control system. Running programs are allowed to access files, sockets and other processes, and SELinux sets limits so that harmful applications cannot break the system.

4. Limit access to external systems

The next protection method after updating is to limit access to external services. To do this, edit the files /etc/hosts.allow and /etc/hosts.deny.

    Here is an example of how to restrict access to telnet and ftp:

In the file /etc/hosts.allow:

in.telnetd: 123.12.41., 126.27.18., .mydomain.name, .another.name
in.ftpd: 123.12.41., 126.27.18., .mydomain.name, .another.name

The example above will allow telnet and ftp connections from any host in the IP classes 123.12.41.* and 126.27.18.*, as well as from hosts in the domains mydomain.name and another.name.

Next, in the file /etc/hosts.deny:

in.telnetd: ALL
in.ftpd: ALL

    Adding a user with limited rights

We do not recommend connecting to the server as the root user — root has the right to run any command, even ones critical to the system. Therefore it is better to create a user with restricted rights and work through it. Administration can be performed through sudo (substitute user and do) — a temporary elevation to administrator level.

    How to create a new user:

    In Debian and Ubuntu:

Create a user, replacing administrator with the desired name, and specify a password when prompted. Password characters are not displayed on the command line:

    adduser administrator

    Add the user to the sudo group:

    adduser administrator sudo

    Now you can use the prefix sudo when executing commands that require administrator rights, for example:

    sudo apt-get install htop

    In CentOS and Fedora:

    Create a user, replacing administrator with your desired name, and create a password for his account:

useradd administrator && passwd administrator

Add the user to the wheel group to grant sudo rights:

usermod -aG wheel administrator

Use only strong passwords — a minimum of 8 characters of mixed case, with digits and other special characters. To find weak passwords among the users of your server, use utilities such as John the Ripper, and adjust the settings in pam_cracklib.so to enforce strong passwords.

    Set the expiration period of the password with the command chage:

    chage -M 60 -m 7 -W 7 UserName

    Disable password aging with the command:

    chage -M 99999 UserName

    Find out when a user’s password will expire:

    chage -l UserName

    Also, you can edit the fields in the file /etc/shadow:

    {UserName}:{password}:{lastpasswdchanged}:{Minimum_days}:{Maximum_days}:{Warn}:{Inactive}:{Expire}:

    where

• Minimum_days: the minimum number of days required between password changes.
• Maximum_days: the maximum number of days the password remains valid.
• Warn: the number of days before expiration on which the user is warned of the approaching change.
• Expire: the exact date on which the login expires.

It is also necessary to limit reuse of old passwords in the pam_unix.so module and to set a limit on the number of failed login attempts for a user.

    To see the number of failed login attempts:

    faillog

Unblock an account after a failed login:

    faillog -r -u UserName

    To lock and unlock accounts, you can use the command passwd:

# lock the account
passwd -l UserName

# unlock the account
passwd -u UserName

To make sure that all users have passwords set, run:

    awk -F: '($2 == "") {print}' /etc/shadow

    To block users without passwords:

    passwd -l UserName

Make sure that the UID parameter is set to 0 only for the root account. Enter this command to see all users with UID 0:

    awk -F: '($3 == "0") {print}' /etc/passwd

    You should see only:

    root:x:0:0:root:/root:/bin/bash

If there are other lines, check whether you really set their UID to 0, and delete unnecessary lines.

5. Set access rights for users

After setting passwords, make sure all users have access appropriate to their rank and responsibilities. In Linux you can set access permissions on files and directories, which makes it possible to create and control different levels of access for different users.

    Access categories

Linux is built around work with multiple users, so each file belongs to one specific user. Even if the server is administered by one person, multiple accounts are created for various programs.

To view the users in the system, use the command:

    cat /etc/passwd

The file /etc/passwd contains a line for each user of the operating system. Separate users may be created for services and applications; they will also be present in this file.

In addition to individual accounts there is group-based access. Each file belongs to one group, and one user can belong to several groups.

To view the groups your account belongs to, use the command:

    groups

To display a list of all groups in the system, where the first field is the group name:

    cat /etc/group

There is also the “other” access category, for users who do not own the file and do not belong to its group.

    Types of access

For each user category you can set access types. Usually these are the rights to run, read and modify the file. In Linux, access types are written in two notations: alphabetic and octal.

    In alphabetic notation, permissions are indicated by letters:

r = read

w = write (change)

x = execute (run)

In octal notation, the level of access to a file is determined by numbers from 0 to 7, where 0 means no access and 7 means full access to modify, read and execute:

4 = read

2 = write

1 = execute

6. Use keys to connect via SSH

Password authentication is usually used to connect to a host via SSH. We recommend a more secure way: a pair of cryptographic keys. The private key is used instead of a password, which makes brute-force guessing much harder.

For example, let's create a key pair. These actions should be performed on the local computer, not on the remote server. During key generation you can specify a password for accessing the keys. If you leave this field blank, you will not be able to store the generated keys in your computer's keychain manager.

If you have already created RSA keys, skip the generation command. To check for existing keys:

    ls ~/.ssh/id_rsa*

    To generate new keys:

ssh-keygen -b 4096

Upload the public key to the server

Replace administrator with the name of the key owner and 1.1.1.1 with your server's IP address. From the local computer, type:

    ssh-copy-id administrator@1.1.1.1

To test the connection, disconnect and reconnect to the server — the login should now use the created keys.

    Setting up SSH

You can disable SSH login as the root user and use sudo at the beginning of commands to obtain administrator rights. On the server, in the file /etc/ssh/sshd_config, find the PermitRootLogin parameter and set its value to no.

You can also forbid SSH login by password so that all users use keys. In the file /etc/ssh/sshd_config, set the PasswordAuthentication parameter to no. If this line doesn't exist or is commented out, add or uncomment it accordingly.

    In Debian or Ubuntu you can enter:

nano /etc/ssh/sshd_config
...
PasswordAuthentication no

The connection can additionally be secured with two-factor authentication.

7. Install a firewall

Recently a new vulnerability was discovered that allows DDoS attacks on servers running Linux. The bug in the kernel appeared with version 3.6 at the end of 2012. The vulnerability allows hackers to inject viruses into downloaded files and web pages and to expose Tor connections, and the hacking does not take much effort — the IP spoofing method works.

The maximum damage for encrypted HTTPS or SSH connections is termination of the connection, but the attacker can inject new content, including malware, into unsecured traffic. A firewall is suitable protection against such attacks.

    Block access using Firewall

A firewall is one of the most important tools for blocking unwanted incoming traffic. We recommend allowing only the traffic you really need and fully denying everything else.

Most Linux distributions have the iptables controller for packet filtering. Usually advanced users work with it directly; to simplify configuration you can use the UFW utility on Debian/Ubuntu or FirewallD on Fedora.

8. Disable unnecessary services

Experts from the University of Virginia recommend disabling all services you don't use. Some background processes start at boot and run until the system shuts down. To configure these programs, check the initialization scripts. Services can be started via inetd or xinetd.

If your system is configured with inetd, edit the list of background “daemon” programs in the file /etc/inetd.conf; to disable a service's startup, put a “#” at the beginning of its line, turning it from an executable entry into a comment.

If the system uses xinetd, its configuration is in the directory /etc/xinetd.d. Each file in the directory defines a service, which can be disabled by specifying disable = yes, as in this example:

service finger
{
    socket_type = stream
    wait        = no
    user        = nobody
    server      = /usr/sbin/in.fingerd
    disable     = yes
}

It is also worth checking for persistent processes that are not managed by inetd or xinetd; configure their startup scripts in the /etc/init.d directory or /etc/inittab. After making the changes, run this command under the root account:

    /etc/rc.d/init.d/inet restart

9. Protect the server physically

It is impossible to completely defend against malicious attacks by someone with physical access to the server. It is therefore necessary to protect the premises where your system is located. Data centers seriously monitor security: they restrict access to servers, install security cameras and assign permanent guards.

To enter a data center, all visitors must pass certain authentication steps. It is also strongly recommended to use motion sensors in all areas of the center.

10. Protect the server from unauthorized access

An intrusion detection system (IDS) collects data about the system configuration and files, and then compares this data with new changes to determine whether they are harmful to the system.

For example, the Tripwire and Aide tools build a database of system files and protect it with a set of keys. Psad tracks suspicious activity using firewall reports.

Bro is designed for network monitoring: tracking suspicious patterns of actions, collecting statistics, running system commands and generating alerts. RKHunter can be used to protect against viruses and, above all, rootkits. This utility checks your system against a database of known vulnerabilities and can identify unsafe settings in applications.

    Conclusion

The tools and settings above will help you partially protect the system, but safety ultimately depends on your behavior and understanding of the situation. Without care, caution and constant self-education, all the safety measures might not work.

    This text is a translation of the article “Как обезопасить Linux-систему: 10 советов”  published by @1cloud on habrahabr.ru.

    About the CleanTalk service

    CleanTalk is a cloud service to protect websites from spam bots. CleanTalk uses protection methods that are invisible to the visitors of the website. This allows you to abandon the methods of protection that require the user to prove that he is a human (captcha, question-answer etc.).

  • How to reduce a possibility of brute force attacks on WordPress


Until the moment CleanTalk launched a security plugin, I didn't pay much attention to the security of the WordPress admin account and relied only on the complexity of the password.

The most dangerous thing is when bots use brute force to guess the password to the administrator account of the site. This can lead to very serious problems, as the attacker gets full access to the administrator account. Malicious code can be added to your website; the site can be added to a botnet and participate in other attacks or in spreading viruses. The consequences for your reputation can be very sad.

When the security plugin was launched, I began to receive reports on its work with statistics of failed login attempts to the WordPress admin account. Every day there were from 4 to 25 such attempts, from different IP addresses. These were bots trying to guess the password.

    What I noticed:

1. The bots knew my login and were guessing its password.
2. I do not use the default username Admin; I had changed it.
3. There are other admin accounts on the blog, but over several days of observation there were no attempts to break into them.

How did the bots find out my account, and why didn't they try to hack the other administrator accounts? Quite simply: I publish posts and write comments under my account, while the other accounts were made for employees, the host and other people who act only in the website dashboard.

Based on this, I realized that the bots find out the login by parsing pages. Many people publish posts and comments from the admin account.

For example, when you publish a blog post, the link to the author looks like this: http://example.com/author/admin***/. Bots browse your website's code on all pages looking for links of this type and collect the links for all accounts.

The same thing happens if you write a comment from the admin account; only the link looks a bit different: http://example.com/members/admin***/

Even if you published a post or comment from the admin account just once, the bots will find it and will try to crack the account.
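
To make the mechanism concrete, here is a minimal node.js sketch (my own illustration, not from the original article) of the kind of parsing a bot can do — one request and one regular expression are enough to harvest logins from author links:

var https = require('https')

// fetch a page and list the account slugs found in author/member links
https.get('https://example.com/', (res) => {
  var html = ''
  res.on('data', (chunk) => html += chunk)
  res.on('end', () => {
    var logins = html.match(/\/(?:author|members)\/[^\/"']+/g) || []
    console.log(logins) // every match exposes a login that can be brute-forced
  })
})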

I have described one possible scenario for obtaining a list of accounts to hack; there may be others. But experience has shown that if a WordPress administrator account is not used for publications and comments on the website, bots do not know about it.

What to do to minimize the possibility of the website administrator account being hacked:

1. Do not publish posts and comments from the administrator account.
2. Create an account with another role, such as Author or Editor, for each administrator. It all depends on your needs.
3. Change the current administrator user. Attention! Before doing this, back up your website and databases. I can't recommend this step, and if you do it, you do so at your own risk, as it may lead to undesirable consequences.

You will need to create a new user with administrator rights and a user with another role, such as Author. Log into the dashboard with the new account and test the Administrator's ability to manage the site, settings and users.

Go to “Users” and delete the previous admin account. WordPress will ask to whom to reassign the articles and comments; here the pre-created Author user comes in handy. Reassign the articles to it and use it in the future to publish posts and comments.

The same can be done for the other administrator accounts. But most WordPress users would rather install one of the plugins that protect against brute-force attacks, such as the Security & Firewall plugin from CleanTalk.

  • CleanTalk launches a project to ensure the safety of websites

CleanTalk is launching a major project to create a cloud service for website security. The project will include several functions: protection of the site against brute-force attacks, a vulnerability scanner and virus removal.

    Each function will have a number of features which help you easily keep the website safe from hackers.

    (more…)

  • Visualization of attacks, anomalies and security breaches with OpenGraphiti

Those who visit our (Cisco Systems) headquarters in San Jose are always amazed by the large video wall that displays a picture of attacks in real time, with the ability to drill down by touching certain areas of the screen.

However, like any attack map — and I have already collected 34 of them — such visualization is ineffective in real life. Show it to superiors, show it to journalists, include it in a movie… It's all useful, but not very applicable in practice. Usually you have your own data sets, generated by your own defenses, and you want to know what is happening in your network or is directed at your network — certainly not the beautiful map with “ballistic missile attacks” drawn by the absolute majority of companies offering attack visualization services.

    (more…)