PHP & Web Development Blogs

Showing 16 to 20 of blog articles.
12997 views · 5 years ago
Working With Thin Controller And Fat Model Concept In Laravel

Models and controllers are two of the most essential building blocks of the Laravel MVC framework, and each serves a distinct purpose. Models in Laravel are created inside the app folder and are mostly used to interact with the database through Eloquent ORM, while controllers live inside the app/Http/Controllers directory.

As a programmer, you should know how to keep the balance between models and controllers, and which of the two should carry most of the functional work in applications deployed on any PHP/MySQL hosting.


What is the Concept of Thin Controller and FAT Models


The concept of the thin controller and fat model is that we do less work in our controllers and more work in our models. We use our controllers to validate our data and then pass it to the models, while in the models we define the actual business logic and the main operations of the application. This way of structuring code is a basic principle of MVC and the very thing that separates it from conventional, tangled programming, yet it is something we sometimes mistakenly ignore.


Why FAT Controllers Are Bad For Handling Code


Controllers are meant to be short and concise: they should only receive requests and return responses. Anything beyond that should be programmed in models, which are made for the main functional operations.

Placing functional logic in controllers can hurt your applications deployed on any hosting for PHP in several ways. It not only makes the code structure longer but often more complex as well. It also hurts reusability: if the same functionality is needed in another route, pulling the whole block of code out of the controller becomes difficult.

Although Laravel is an MVC framework, while developing on Laravel we sometimes ignore this and write almost all of our code, including calls to our App\Model classes and all of our functional logic, inside controller route methods. What we can do instead is create a sub-model of our parent model. For example, if our parent model is User and we use the same User model for all types of users, we can create a CustomerModel alongside it and write all the logic related to the Customer user type there.
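As a rough sketch of that idea (the class, method and column names here are hypothetical and not part of the original post), such a sub-model could look like this:

<?php
namespace App;

use App\User;

class CustomerModel
{
    // All logic specific to the "customer" user type lives here,
    // keeping both the shared User model and the controllers thin.
    // Assumes the users table has 'type' and 'active' columns.
    public function getActiveCustomers()
    {
        return User::where('type', 'customer')
            ->where('active', 1)
            ->get();
    }
}
?>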

So now let's take the example from my existing blog post on creating a comment system with Laravel and Vue.js. In that article, you can see I made quite a mess in my controller methods: most of the comment logic is written directly inside them. To shorten that, let's clean them up in this article. Inside the app folder, I will create a new file named CommentModel.php and write the whole logic for the comment functions in it. This is my basic file:


<?php
namespace App;

use App\Comment;
use App\CommentVote;
use App\CommentSpam;
use App\User;
use Auth;

class CommentModel
{


}

?>



Right now it contains no functions, but it references all the models this class requires. Let's first add a function named getAllComments, which takes $pageId as a parameter. The function will get all the comments for the given page:


public function getAllComments($pageId)
{
$comments = Comment::where('page_id',$pageId)->get();

$commentsData = [];


foreach ($comments as $key) {
$user = User::find($key->users_id);
$name = $user->name;
$replies = $this->replies($key->id);
$photo = $user->first()->photo_url;
$reply = 0;
$vote = 0;
$voteStatus = 0;
$spam = 0;
if(Auth::user()){
$voteByUser = CommentVote::where('comment_id',$key->id)->where('user_id',Auth::user()->id)->first();
$spamComment = CommentSpam::where('comment_id',$key->id)->where('user_id',Auth::user()->id)->first();


if($voteByUser){
$vote = 1;
$voteStatus = $voteByUser->vote;
}

if($spamComment){
$spam = 1;
}
}


if(sizeof($replies) > 0){
$reply = 1;
}

if(!$spam){
array_push($commentsData,[
"name" => $name,
"photo_url" => (string)$photo,
"commentid" => $key->id,
"comment" => $key->comment,
"votes" => $key->votes,
"reply" => $reply,
"votedByUser" =>$vote,
"vote" =>$voteStatus,
"spam" => $spam,
"replies" => $replies,
"date" => $key->created_at->toDateTimeString()
]);
}


}
$collection = collect($commentsData);
return $collection->sortBy('votes');
}



Now I will create another function named replies, which takes $commentId as a parameter. It is programmed in more or less the same manner as the getAllComments function above.


protected function replies($commentId)
{
$comments = Comment::where('reply_id',$commentId)->get();
$replies = [];



foreach ($comments as $key) {
$user = User::find($key->users_id);
$name = $user->name;
$photo = $user->first()->photo_url;

$vote = 0;
$voteStatus = 0;
$spam = 0;


if(Auth::user()){
$voteByUser = CommentVote::where('comment_id',$key->id)->where('user_id',Auth::user()->id)->first();
$spamComment = CommentSpam::where('comment_id',$key->id)->where('user_id',Auth::user()->id)->first();

if($voteByUser){
$vote = 1;
$voteStatus = $voteByUser->vote;
}

if($spamComment){
$spam = 1;
}
}
if(!$spam){


array_push($replies,[
"name" => $name,
"photo_url" => $photo,
"commentid" => $key->id,
"comment" => $key->comment,
"votes" => $key->votes,
"votedByUser" => $vote,
"vote" => $voteStatus,
"spam" => $spam,
"date" => $key->created_at->toDateTimeString()
]);
}




}


$collection = collect($replies);
return $collection->sortBy('votes');
}



Now let's create a function createComment, which takes $array as a parameter:


public function createComment($array)
{
$comment = Comment::create($array);


if($comment)
return [ "status" => "true","commentId" => $comment->id ];
else
return [ "status" => "false" ];
}



Similarly, I will now create all the comment-related functions in my CommentModel, so that everything is accumulated in one model.


<?php
namespace App;

use App\Comment;
use App\CommentSpam;
use App\CommentVote;
use App\User;
use Auth;

class CommentModel
{
public function getAllComments($pageId)
{
$comments = Comment::where('page_id', $pageId)->get();

$commentsData = [];

foreach ($comments as $key) {
$user = User::find($key->users_id);
$name = $user->name;
$replies = $this->replies($key->id);
$photo = $user->first()->photo_url;
$reply = 0;
$vote = 0;
$voteStatus = 0;
$spam = 0;
if (Auth::user()) {
$voteByUser = CommentVote::where('comment_id', $key->id)->where('user_id', Auth::user()->id)->first();
$spamComment = CommentSpam::where('comment_id', $key->id)->where('user_id', Auth::user()->id)->first();

if ($voteByUser) {
$vote = 1;
$voteStatus = $voteByUser->vote;
}

if ($spamComment) {
$spam = 1;
}
}

if (sizeof($replies) > 0) {
$reply = 1;
}

if (!$spam) {
array_push($commentsData, [
"name" => $name,
"photo_url" => (string) $photo,
"commentid" => $key->id,
"comment" => $key->comment,
"votes" => $key->votes,
"reply" => $reply,
"votedByUser" => $vote,
"vote" => $voteStatus,
"spam" => $spam,
"replies" => $replies,
"date" => $key->created_at->toDateTimeString(),
]);
}

}
$collection = collect($commentsData);
return $collection->sortBy('votes');
}

protected function replies($commentId)
{
$comments = Comment::where('reply_id', $commentId)->get();
$replies = [];

foreach ($comments as $key) {
$user = User::find($key->users_id);
$name = $user->name;
$photo = $user->first()->photo_url;

$vote = 0;
$voteStatus = 0;
$spam = 0;

if (Auth::user()) {
$voteByUser = CommentVote::where('comment_id', $key->id)->where('user_id', Auth::user()->id)->first();
$spamComment = CommentSpam::where('comment_id', $key->id)->where('user_id', Auth::user()->id)->first();

if ($voteByUser) {
$vote = 1;
$voteStatus = $voteByUser->vote;
}

if ($spamComment) {
$spam = 1;
}
}
if (!$spam) {

array_push($replies, [
"name" => $name,
"photo_url" => $photo,
"commentid" => $key->id,
"comment" => $key->comment,
"votes" => $key->votes,
"votedByUser" => $vote,
"vote" => $voteStatus,
"spam" => $spam,
"date" => $key->created_at->toDateTimeString(),
]);
}

}

$collection = collect($replies);
return $collection->sortBy('votes');
}

public function createComment($array)
{
$comment = Comment::create($array);

if ($comment) {
return ["status" => "true", "commentId" => $comment->id];
} else {
return ["status" => "false"];
}

}

public function voteComment($commentId, $array)
{
$comments = Comment::find($commentId);
$data = [
"comment_id" => $commentId,
'vote' => $array->vote,
'user_id' => $array->users_id,
];

if ($array->vote == "up") {
$comment = $comments->first();
$vote = $comment->votes;
$vote++;
$comments->votes = $vote;
$comments->save();
}

if ($array->vote == "down") {
$comment = $comments->first();
$vote = $comment->votes;
$vote--;
$comments->votes = $vote;
$comments->save();
}

if (CommentVote::create($data)) {
return true;
}

}

public function spamComment($commentId, $array)
{
$comments = Comment::find($commentId);

$comment = $comments->first();
$spam = $comment->spam;
$spam++;
$comments->spam = $spam;
$comments->save();

$data = [
"comment_id" => $commentId,
'user_id' => $array->users_id,
];

if (CommentSpam::create($data)) {
return true;
}

}
}
?>



Now we have all our required methods in CommentModel, so let's clean up CommentController, which currently has a rather complex and lengthy structure. Right now CommentController looks like this:


<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Http\Requests;
use App\Comment;
use App\CommentVote;
use App\CommentSpam;
use App\User;
use Auth;

class CommentController extends Controller
{



public function index($pageId)
{
$comments = Comment::where('page_id',$pageId)->get();

$commentsData = [];




foreach ($comments as $key) {
$user = User::find($key->users_id);
$name = $user->name;
$replies = $this->replies($key->id);
$photo = $user->first()->photo_url;
$reply = 0;
$vote = 0;
$voteStatus = 0;
$spam = 0;
if(Auth::user()){
$voteByUser = CommentVote::where('comment_id',$key->id)->where('user_id',Auth::user()->id)->first();
$spamComment = CommentSpam::where('comment_id',$key->id)->where('user_id',Auth::user()->id)->first();


if($voteByUser){
$vote = 1;
$voteStatus = $voteByUser->vote;
}

if($spamComment){
$spam = 1;
}
}


if(sizeof($replies) > 0){
$reply = 1;
}

if(!$spam){
array_push($commentsData,[
"name" => $name,
"photo_url" => (string)$photo,
"commentid" => $key->id,
"comment" => $key->comment,
"votes" => $key->votes,
"reply" => $reply,
"votedByUser" =>$vote,
"vote" =>$voteStatus,
"spam" => $spam,
"replies" => $replies,
"date" => $key->created_at->toDateTimeString()
]);
}


}
$collection = collect($commentsData);
return $collection->sortBy('votes');
}

protected function replies($commentId)
{
$comments = Comment::where('reply_id',$commentId)->get();
$replies = [];



foreach ($comments as $key) {
$user = User::find($key->users_id);
$name = $user->name;
$photo = $user->first()->photo_url;

$vote = 0;
$voteStatus = 0;
$spam = 0;


if(Auth::user()){
$voteByUser = CommentVote::where('comment_id',$key->id)->where('user_id',Auth::user()->id)->first();
$spamComment = CommentSpam::where('comment_id',$key->id)->where('user_id',Auth::user()->id)->first();

if($voteByUser){
$vote = 1;
$voteStatus = $voteByUser->vote;
}

if($spamComment){
$spam = 1;
}
}
if(!$spam){


array_push($replies,[
"name" => $name,
"photo_url" => $photo,
"commentid" => $key->id,
"comment" => $key->comment,
"votes" => $key->votes,
"votedByUser" => $vote,
"vote" => $voteStatus,
"spam" => $spam,
"date" => $key->created_at->toDateTimeString()
]);
}




}


$collection = collect($replies);
return $collection->sortBy('votes');
}


public function store(Request $request)
{
$this->validate($request, [
'comment' => 'required',
'reply_id' => 'filled',
'page_id' => 'filled',
'users_id' => 'required',
]);
$comment = Comment::create($request->all());
if($comment)
return [ "status" => "true","commentId" => $comment->id ];
}


public function update(Request $request, $commentId,$type)
{
if($type == "vote"){


$this->validate($request, [
'vote' => 'required',
'users_id' => 'required',
]);

$comments = Comment::find($commentId);
$data = [
"comment_id" => $commentId,
'vote' => $request->vote,
'user_id' => $request->users_id,
];

if($request->vote == "up"){
$comment = $comments->first();
$vote = $comment->votes;
$vote++;
$comments->votes = $vote;
$comments->save();
}

if($request->vote == "down"){
$comment = $comments->first();
$vote = $comment->votes;
$vote--;
$comments->votes = $vote;
$comments->save();
}

if(CommentVote::create($data))
return "true";
}

if($type == "spam"){


$this->validate($request, [
'users_id' => 'required',
]);

$comments = Comment::find($commentId);


$comment = $comments->first();
$spam = $comment->spam;
$spam++;
$comments->spam = $spam;
$comments->save();

$data = [
"comment_id" => $commentId,
'user_id' => $request->users_id,
];

if(CommentSpam::create($data))
return "true";
}
}


public function destroy($id)
{
}
}?>



After cleaning up the controller, it looks much simpler and easier to understand:


<?php

namespace App\Http\Controllers;

use App\CommentModel;
use Illuminate\Http\Request;

class CommentController extends Controller
{

private $commentModel = null;
public function __construct()
{
$this->commentModel = new CommentModel();
}


public function index($pageId)
{
return $this->commentModel->getAllComments($pageId);
}


public function store(Request $request)
{
$this->validate($request, [
'comment' => 'required',
'reply_id' => 'filled',
'page_id' => 'filled',
'users_id' => 'required',
]);
return $this->commentModel->createComment($request->all());
}


public function update(Request $request, $commentId, $type)
{
if ($type == "vote") {

$this->validate($request, [
'vote' => 'required',
'users_id' => 'required',
]);

return $this->commentModel->voteComment($commentId, $request->all());
}

if ($type == "spam") {

$this->validate($request, [
'users_id' => 'required',
]);

return $this->commentModel->spamComment($commentId, $request->all());
}
}

}
?>
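Note that the constructor has to be public so Laravel can instantiate the controller. As an alternative, you can let Laravel's service container inject the model for you by type-hinting it in the constructor; a minimal sketch of that variation:

public function __construct(CommentModel $commentModel)
{
    // Laravel's service container resolves and injects CommentModel automatically
    $this->commentModel = $commentModel;
}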




Wrap Up!


So isn't it looking much cleaner and simpler to understand now? This is what a thin controller and fat model actually look like. All the logic related to the comment system is programmed in our CommentModel, and our controller is now only used to pass data from the user to the model and return the response that comes back from the model.

This is how the thin controller and fat model structure is built. Share your thoughts in the comments below.
12812 views · 4 years ago
Why Cloudways is the Perfect Managed Hosting for PHP Applications

The following is a sponsored blog post by Cloudways


Developing an application is not the only thing you should bank on; you must also strive to find the best hosting solution on which to deploy it. The application's speed depends on the hosting provider, which is why I always advise you to choose the best hosting solution you can to get the ultimate app performance.

Nowadays, choosing a web host is a big challenge, as each one has its own pros and cons which you must know before finally settling on it for deployment. I don't recommend shared hosting for PHP/Laravel-based applications, because you always get a lot of server hassles like downtime, hacking, 500 errors, lousy support and other problems that are part and parcel of shared hosting.

For PHP applications, you must focus on the more technical aspects like caching, configuration and databases, because these are essential performance points for any vanilla or framework-based PHP application. Additionally, if the app focuses on user engagement (for instance, an ecommerce store), the hosting solution should be robust enough to handle spikes in traffic.

Here, I would like to introduce Cloudways PHP server hosting, an easy, developer- and designer-friendly managed hosting platform. With Cloudways, you don't need to worry about PHP hosting and can instead focus on building your application. You can easily launch cloud servers on five providers: DigitalOcean, Linode, Vultr, AWS and GCE.


Cloudways ThunderStack


Being a developer, you must be familiar with the concept of stack - an arrangement of technologies that form the underlying hosting solution.

To provide a blazing fast speed and a glitch-free performance, Cloudways has built a PHP stack, known as ThunderStack. This stack consists of technologies that offer maximum uptime and page load speed to all PHP applications. Check out the following visual representation of ThunderStack and the constituent technologies:


[Image: ThunderStack technologies]


As you can see, ThunderStack comprises a mix of static and dynamic caches along with two web servers, Nginx and Apache. This combination ensures the ultimate experience for the users and visitors of your application.


Frameworks and CMS


The strength and popularity of PHP lies in the variety of frameworks and CMSs it offers to developers. Realizing this diversity, Cloudways offers hassle-free installation of major PHP frameworks including Symfony, Laravel, CakePHP, Zend, and CodeIgniter. Similarly, popular CMSs such as WordPress, Bolt, Craft, October, Couch, and Coaster CMS can be installed with the 1-click option. The best part is that if you have a framework or CMS that is not on the list, you can easily install it through Composer.


1-Click PHP Server & Application Installation


Setting up a stack on an unmanaged VPS could take an entire day!

When you opt for Cloudways managed cloud hosting, the entire process of setting up the server, installation of core PHP files and then the setup of the required framework is over in a matter of minutes.

Just sign up at Cloudways, choose your desired cloud provider, and select the PHP stack application.


[Screenshot]


As you can see, your LAMP stack is ready for business in minutes.

Many PHP applications fail because essential services are either turned off or not set up properly. Cloudways offers a centralized location where you can view and set the status of all essential services such as:



* Apache
* Elasticsearch
* Memcached
* MySQL
* PHP-FPM
* Nginx
* New Relic
* Redis
* Varnish


[Screenshot]


Similarly, you can manage SMTP add-ons without any fuss.


Staging Environment


With Cloudways, you can test your web applications for possible bugs and errors before taking it live.

Using the staging feature, developers can first deploy their websites to test domains where they can analyze the application's performance and potential problems. This helps site administrators fix those issues in time and view the application's performance in real time.

A default subdomain comes pre-configured with the newly launched application, making it easy for administrators to test the application on that testing subdomain. Overall, it's a great feature which helps developers learn about possible errors before the live deployment.

[Screenshot]

Pre-Installed Composer & Git


PHP development requires working with external libraries and packages. Suppose you are working with Laravel and you need to install an external package. Since Composer has become the standard way of installing packages, it comes preinstalled on the Cloudways platform. Just launch the application and start using Composer in your project.

Similarly, if you are familiar with Git and maintain your project on GitHub or BitBucket, you don’t need to worry about Git installation. Git also comes pre-configured on Cloudways. You can start running commands right after application launch.


Cloudways MySQL Manager


When you work with databases in PHP, you need a database manager. On the Cloudways platform, you will get a custom-built MySQL manager, in which you can perform all the tasks of a typical DB manager.

[Screenshot]


However, if you wish to install and use another database manager like phpMyAdmin, you can do so by following this simple guide on installing phpMyAdmin.


Server & Application Level SSH


If you use Linux, you typically use SSH to access your server(s) and individual applications. A third-party developer may need application-level or server-level access, depending on the client's requirements, and Cloudways offers SSH access at both levels to fit them.

[Screenshot]


PHP-FPM, Varnish & Cron Settings


Cloudways provides a custom UI panel to set and maintain PHP-FPM and Varnish settings. Although a default configuration is already in place, you can easily change all the settings to suit your own development requirements. In the Varnish settings, you can define URLs that you want to exclude from caching, and you can also set permissions in this panel.

[Screenshot]


Cron jobs are a very common part of the PHP application development process. On the Cloudways platform, you can easily set up cron jobs in just a few clicks: just declare the PHP script URL and the time when the script should run.

[Screenshot]


Cloudways API & Personal Assistant Bot


Cloudways provides an API that covers all important aspects of server and application management. Through the Cloudways RESTful API, you can easily develop, integrate, automate, and manage your servers and web apps on the Cloudways platform. Check out some of the use cases developed using the Cloudways API. You just need your API key and email to authenticate the HTTP calls in the API Playground and in custom applications.

[Screenshot]


Cloudways employs a smart assistant named CloudwaysBot to notify all users about server and application level issues. CloudwaysBot sends the notifications on pre-approved channels including email, Slack and popular task management tools such as Asana and Trello.


Run Your APIs on PHP Stack


Do you have your own API that you want to run on the PHP stack? No problem, you can do that too with Cloudways! You can also use REST API micro-frameworks like Slim, Silex, Lumen, and others. APIs are meant to speed up performance, and they require fast servers with plenty of resources. So, if you think your API response time is slowing down due to a large number of requests, you can easily scale your server(s) with a click to address the situation.


Team Collaboration


When you work on a large number of applications with multiple developers, you need to assign each developer to specific applications. Cloudways provides an awesome team collaboration feature through which you can assign developers to specific applications and give them access; you can also assign one developer to multiple applications. Through the team feature, you can connect the team and work on a single platform. Access can be of different types, i.e. billing, support and console, and you can grant either full access or limited access by selecting features in the Team tab.

[Screenshot]


Final Words


Managed cloud hosting ensures that you are not bothered by any hosting or server related issues. For practical purposes, this means that developers can concentrate on writing awesome code without worrying about underlying infrastructure and hosting related issues. Do sign up and check out Cloudways for the best and the most cost-effective cloud hosting solution for your next PHP project!
12422 views · 5 years ago
Creating a PHP Daemon Service

What is a Daemon?

The term daemon was coined by the programmers of Project MAC at MIT. It was inspired by Maxwell's demon, the imaginary agent that sorts molecules in the background. UNIX systems adopted this terminology for background (daemon) programs.

It also refers to a character from Greek mythology that performs the tasks the gods do not want to take on themselves. As stated in the UNIX System Administration Handbook, in ancient Greece the concept of a "personal daemon" was, in part, comparable to the modern concept of a "guardian angel." The BSD family of operating systems uses the image of a daemon as its logo.

Daemons are usually started at machine boot time. In the technical sense, a daemon is a process that has no controlling terminal and, accordingly, no user interface. Most often the ancestor of a daemon is init, the root process on UNIX, although many daemons are launched from special rc.d scripts started from a terminal console.

Richard Stevens describes the following steps for writing a daemon:
    1. Reset the file mode creation mask to 0 with umask(), so that access-right bits inherited from the starting process are cleared.
    2. Call fork() and terminate the parent process. If the process was launched as part of a shell pipeline, the shell then believes the command has finished; the child inherits the process group ID of the parent but gets its own process ID, which guarantees it is not a process group leader.
    3. Create a new session by calling setsid(). The process becomes the leader of a new session and a new process group, and loses its controlling terminal.
    4. Change the current working directory to the root directory so that no mounted file system stays busy because of it.
    5. Close all file descriptors.
    6. Redirect descriptors 0, 1 and 2 (STDIN, STDOUT and STDERR) to /dev/null or to files such as /var/log/project_name.out, because some standard library functions use these descriptors.
    7. Record the pid (process ID number) in a pid file, e.g. /var/run/projectname.pid.
    8. Handle the SIGTERM and SIGHUP signals correctly: terminate by cleaning up all child processes and pid files, and/or reconfigure.

How to Create Daemons in PHP

To create daemons in PHP you need the pcntl and posix extensions. For fast communication within daemon scripts, the libevent extension is recommended for asynchronous I/O.
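Before going any further, it is worth checking at startup that the required extensions are actually loaded; a minimal sketch of such a check:

foreach (array('pcntl', 'posix') as $extension) {
    if (!extension_loaded($extension)) {
        fwrite(STDERR, "Required extension '$extension' is not loaded" . PHP_EOL);
        exit(1);
    }
}

// libevent is optional, but recommended for asynchronous I/O
if (!extension_loaded('libevent')) {
    fwrite(STDERR, "Warning: libevent extension not found, falling back to blocking I/O" . PHP_EOL);
}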

Let's take a closer look at the code that starts a daemon:
umask(0);
$pid = pcntl_fork();
if ($pid < 0) {
    print('fork failed');
    exit(1);
}


After the fork, the program executes as if there were two branches of the code, one for the parent process and one for the child. What distinguishes the two processes is the value returned by the fork() call: the parent receives the ID of the newly created child process, and the child receives 0.
if ($pid > 0) {
    echo "daemon process started\n";
    exit;
}

$sid = posix_setsid();
if ($sid < 0) {
    exit(2);
}

chdir('/');
file_put_contents($pidFilename, getmypid());
run_process();


The implementation of step 5, "close all file descriptors", can be done in two ways. Closing all file descriptors is difficult to do cleanly in PHP, so first, simply avoid opening any file descriptors before the fork(). Second, you can redirect standard output to an error log file using ini_set(), or use output buffering with ob_start() to capture output into a variable and write it to a log file:
ob_start();
var_dump($some_object);
$content = ob_get_clean();
fwrite($fd_log, $content); 


Typically, ob_start() marks the start of the daemon's life cycle and the ob_get_clean() and fwrite() calls mark the end. Alternatively, you can directly override STDIN, STDOUT and STDERR:
ini_set('error_log', $logDir.'/error.log');
fclose(STDIN); 
fclose(STDOUT);
fclose(STDERR);
$STDIN = fopen('/dev/null', 'r');
$STDOUT = fopen($logDir.'/application.log', 'ab');
$STDERR = fopen($logDir.'/application.error.log', 'ab');


Now, our process is disconnected from the terminal and the standard output is redirected to a log file.

Handling Signals

Signal processing is carried out with handlers, either via the pcntl extension (pcntl_signal_dispatch()) or via libevent. In the first case, you define a signal handler:
function sig_handler($signo)
{
    global $fd_log, $pidfile;
    switch ($signo) {
        case SIGTERM:
            fclose($fd_log);
            unlink($pidfile);
            exit;
        case SIGHUP:
            init_data();
            break;
        default:
            // ignore all other signals
    }
}

pcntl_signal(SIGTERM, "sig_handler");
pcntl_signal(SIGHUP, "sig_handler");


Note that signals are only processed when the process is in active mode. Signals received while the process is waiting for input or sleeping will not be handled immediately; call pcntl_signal_dispatch() to dispatch pending signals. We can ignore a signal using the flag SIG_IGN: pcntl_signal(SIGHUP, SIG_IGN); or, if necessary, restore the default signal handler using the flag SIG_DFL: pcntl_signal(SIGHUP, SIG_DFL);
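For example, a typical worker loop dispatches pending signals on every iteration, so that a SIGTERM or SIGHUP received while the daemon was busy is handled right after the current unit of work (process_next_task() is a placeholder for your own logic):

// assumes the sig_handler() registration shown above
while (true) {
    process_next_task();      // placeholder for the daemon's actual work
    pcntl_signal_dispatch();  // deliver any signals that arrived during the task
    usleep(100000);           // short sleep to avoid busy-waiting when idle
}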

Asynchronous I/O with Libevent

If you use blocking input/output, this style of signal processing does not apply. In that case it is recommended to use the libevent library, which provides non-blocking input/output as well as signal handling and timers. Libevent offers a simple mechanism for running callback functions on file descriptor events: Write, Read, Timeout and Signal.

Initially, you declare one or more events with a handler (callback function) and attach them to the base event context:
$base = event_base_new();
$event = event_new();
$errno = 0;
$errstr = '';
$socket = stream_socket_server("tcp://$IP:$port", $errno, $errstr);
stream_set_blocking($socket, 0);
event_set($event, $socket, EV_READ | EV_PERSIST, 'onAccept', $base);
event_base_set($event, $base);
event_add($event);


The handler functions 'onRead', 'onWrite' and 'onError' must implement the processing logic. Incoming data is written into the buffer and is then read from it in non-blocking mode:
function onRead($buffer, $id)
{
while($read = event_buffer_read($buffer, 256)) {
var_dump($read);
}
}


The main event loop runs with the function event_base_loop($base);. You can exit the loop from inside a handler by calling event_base_loopbreak(), or after a specified time (timeout) with event_base_loopexit();.

Error handling deals with failure Events:
function onError($buffer, $error, $id)
{
global $id, $buffers, $ctx_connections;
event_buffer_disable($buffers[$id], EV_READ | EV_WRITE);
event_buffer_free($buffers[$id]);
fclose($ctx_connections[$id]);
unset($buffers[$id], $ctx_connections[$id]);
}


One subtlety should be noted: working with timers is only possible through a file descriptor, and the example in the official documentation does not work. Here is an example of processing that runs at regular intervals.
$event2 = event_new();
$tmpfile = tmpfile();
event_set($event2, $tmpfile, 0, 'onTimer', $interval);
$res = event_base_set($event2, $base);
event_add($event2, 1000000 * $interval);


With this code we get a working timer that fires only once. If we need a "permanent" timer, the onTimer function must create a new event each time and re-register it to run again after the interval:
function onTimer($tmpfile, $flag, $interval)
{
    global $base, $event2;

    if ($event2) {
        event_delete($event2);
        event_free($event2);
    }

    call_user_func('process_data', $args); // $args is prepared elsewhere in the daemon

    $event2 = event_new();
    event_set($event2, $tmpfile, 0, 'onTimer', $interval);
    $res = event_base_set($event2, $base);
    event_add($event2, 1000000 * $interval);
}


At the end of the daemon we must release all previously allocated resources:
event_delete($event);
event_free($event);
event_base_free($base);



It should also be noted that for signal processing the handler is registered with the EV_SIGNAL flag: event_set($event, SIGHUP, EV_SIGNAL, 'onSignal', $base);

If constant signal processing is needed, the EV_PERSIST flag must be set as well. Below is a handler for the onAccept event, which fires when a new connection is accepted on the file descriptor:
function onAccept($socket, $flag, $base) {
global $id, $buffers, $ctx_connections;
$id++;
$connection = stream_socket_accept($socket);
stream_set_blocking($connection, 0);
$buffer = event_buffer_new($connection, 'onRead', NULL, 'onError', $id);
event_buffer_base_set($buffer, $base);
event_buffer_timeout_set($buffer, 30, 30);
event_buffer_watermark_set($buffer, EV_READ, 0, 0xffffff);
event_buffer_priority_set($buffer, 10);
event_buffer_enable($buffer, EV_READ | EV_PERSIST);
$ctx_connections[$id] = $connection;
$buffers[$id] = $buffer;
}


Monitoring a Daemon

It is good practice to develop the application so that the daemon process can be monitored. Key indicators to monitor are the number of items or requests processed in a time interval, the processing rate, the average time to process a single request, and the idle time.

These metrics help us understand the workload of our daemon; if it cannot cope with the load it receives, you can run another process in parallel or launch multiple child processes.

To determine these values, the counters need to be sampled at regular intervals, for example once per second. Idle time, for instance, is calculated as the difference between the measurement interval and the total time the daemon spent working.

Typically idle time is expressed as a percentage of the measurement interval. For example, if 10 cycles were executed in one second with a total processing time of 50 ms, the idle time is 950 ms, or 95%.

The throughput will be 10 rps (requests per second). The average processing time of one request, the ratio of the total time spent processing requests to the number of requests processed, will be 5 ms.

These characteristics can be complemented by additional ones such as memory usage, queue size, number of transactions, the average time to access the database, and so on.

An external monitor can obtain this data through a TCP connection or a unix socket, usually in a Nagios- or Zabbix-compatible format depending on the monitoring system. To do this, the daemon should listen on an additional port.
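A minimal sketch of such a monitoring endpoint is shown below; the port, variable names and plain-text format are arbitrary choices, and in a real daemon the accept call would be polled from the main loop rather than blocking:

// counters maintained by the worker loop (names are illustrative)
$requestsProcessed   = 0;
$totalProcessingTime = 0.0;
$intervalStart       = microtime(true);

$monitor = stream_socket_server('tcp://127.0.0.1:8081', $errno, $errstr);

if ($client = @stream_socket_accept($monitor, 0)) {
    $elapsed = microtime(true) - $intervalStart;
    $rps     = $elapsed > 0 ? $requestsProcessed / $elapsed : 0;
    $avgMs   = $requestsProcessed > 0 ? ($totalProcessingTime / $requestsProcessed) * 1000 : 0;
    fwrite($client, sprintf("rps:%.2f avg_ms:%.2f\n", $rps, $avgMs));
    fclose($client);
}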

As mentioned above, if one worker process cannot handle the load, we usually run multiple processes in parallel. Parallel processes should be started by a parent master process that uses fork() to launch a series of child processes.

Why not launch processes using exec() or system()? Because, as a rule, you need direct control over the master and child processes so that you can interact with them via signals. If you use exec() or system(), a shell interpreter is launched first, and it in turn starts processes that are not direct descendants of the parent process.

There is also a misconception that you can turn a process into a daemon with the nohup command. Yes, it is possible to issue a command such as: nohup php mydaemon.php -master >> /var/log/daemon.log 2>> /var/log/daemon.error.log &

But in this case it would be difficult to perform log rotation, as nohup "captures" the STDOUT/STDERR file descriptors and releases them only when the command ends, which may overload the process or even the entire server. Overloading the daemon process may affect the integrity of data processing and possibly cause partial data loss.

Starting a Daemon

The daemon must be started either automatically at boot time or with the help of a startup script.

Startup scripts usually live in /etc/rc.d, and the service's startup script is placed in /etc/init.d/. The service is then started with service myapp start or /etc/init.d/myapp start, depending on the type of OS.

Here is a sample startup script:
#! /bin/sh
#
appdir=/usr/share/myapp/app.php
parms="--master --proc=8 --daemon"
export appdir
export parms

if [ ! -f "$appdir" ]; then
exit 1
fi

if [ -x /etc/rc.d/init.d/functions ]; then
. /etc/rc.d/init.d/functions
fi

RETVAL=0

start () {
echo "Starting app"
daemon /usr/bin/php $appdir $parms
RETVAL=$?
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/mydaemon
echo
return $RETVAL
}

stop () {
echo -n "Stopping app: "
# adjust killproc to match how your daemon process is identified
killproc /usr/bin/php
RETVAL=$?
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/mydaemon
echo
return $RETVAL
}

case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
status /usr/bin/mydaemon
;;
*)
echo "Usage: $0 {start|stop|restart|status}"
RETVAL=1
;;
esac

exit $RETVAL


Distributing Your PHP Daemon

To distribute a daemon it is better to pack it into a single phar archive. The assembled archive should include all the necessary PHP and .ini files.

Below is a sample build script:
<?php
if (is_file('app.phar')) {
    unlink('app.phar');
}
$phar = new Phar('app.phar', 0, 'app.phar');
$phar->compressFiles(Phar::GZ);
$phar->setSignatureAlgorithm(Phar::SHA1);
$files = array();
$files['bootstrap.php'] = './bootstrap.php';
$rd = new RecursiveIteratorIterator(new RecursiveDirectoryIterator('.'));
foreach ($rd as $file) {
    if ($file->getFilename() != '..' && $file->getFilename() != '.' && $file->getFilename() != __FILE__) {
        if ($file->getPath() != './log' && $file->getPath() != './script' && $file->getPath() != '.')
            $files[substr($file->getPath().DIRECTORY_SEPARATOR.$file->getFilename(), 2)] =
                $file->getPath().DIRECTORY_SEPARATOR.$file->getFilename();
    }
}
if (isset($opt['version'])) {
    $version = $opt['version'];
}
$phar->buildFromIterator(new ArrayIterator($files));
$phar->setStub($phar->createDefaultStub('bootstrap.php'));
$phar = null;

Additionally, it may be advisable to package it (for example as a PEAR package) as a standard unix console utility that, when run with no arguments, prints its own usage instructions:
#php app.phar
myDaemon version 0.1 Debug
usage:
--daemon – run as daemon
--debug – run in debug mode
--settings – print settings
--nofork – do not run child processes
--check – check dependency modules
--master – run as master
--proc=[8] – number of child processes to run


Conclusion

Creating daemons in PHP is not hard, but making them run correctly requires following the steps described in this article.

Post a comment here if you have questions or comments on how to create daemon services in PHP.
11993 views · 5 years ago
Welcome to PHP 7.1

In case you are living under a rock, the latest version of PHP was released last week. PHP developers around the world began rebuilding their development containers with it so they could run their tests. Now it's your turn. If you haven't already installed it, you can download it here: http://php.net/downloads.php. Grab it, get it running in your development environment, and run those unit tests. If all goes well, you can begin planning your staged deployment to production.

If you need a quick start guide to get you going, our good friend Mr. Colin O'Dell has just the thing for you: "Installing PHP 7.1". It'll get you up and running quickly on PHP 7.1.

What’s the big deal about PHP 7.1? I am so glad you asked. Here are the major new features released in PHP 7.1.

* Nullable types
* Void return type
* Iterable pseudo-type
* Class constant visibility modifiers
* Square bracket syntax for list() and the ability to specify keys in list()
* Catching multiple exception types
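Here is a small, self-contained sketch (the class and method names are made up for illustration) that touches several of these features at once: class constant visibility, a nullable return type, void and iterable, keyed list() destructuring with square brackets, and multi-catch:

<?php
class UserRepository
{
    private const TABLE = 'users'; // class constant visibility modifier

    // nullable return type: returns an array or null
    public function find(int $id): ?array
    {
        return $id === 1 ? ['id' => 1, 'name' => 'Ada'] : null;
    }

    // void return type and iterable pseudo-type
    public function import(iterable $rows): void
    {
        foreach ($rows as $row) {
            // square bracket syntax for list() with keys
            ['id' => $id, 'name' => $name] = $row;
            error_log(self::TABLE . ": importing $id => $name");
        }
    }
}

try {
    (new UserRepository())->import([['id' => 2, 'name' => 'Grace']]);
} catch (RuntimeException | InvalidArgumentException $e) { // catching multiple exception types
    echo $e->getMessage();
}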

Now if you want a quick intro to several of these new features, check out our “RFCs of the Future” playlist on YouTube. In it, I talk about 4 of the new features.

Oh and while you are watching things download & compile, why not take the time to give a shoutout to all the core contributors, and a special thank you to Davey Shafik and Joe Watkins, the PHP 7.1 release managers.

Cheers!
=C=
11882 views · 5 years ago
Five Composer Tips Every PHP Developer Should Know

Composer is the way that PHP developers manage libraries and their dependencies. Previously, developers mainly stuck to existing frameworks. If you were a Symfony developer, you used Symfony and libraries built around it. You didn't dare cross the line to Zend Framework. These days however, developers focus less on frameworks, and more on the libraries they need to build the project they are working on. This decoupling of projects from frameworks is largely possible because of Composer and the ecosystem that has built up around it.

Like PHP, Composer is easy to get started in, but complex enough to take time and practice to master. The Composer manual does a great job of getting you up and running quickly, but some of the commands are involved enough so that many developers miss some of their power because they simply don’t understand.

I’ve picked out five commands that every user of Composer should master. In each section I give you a little insight into the command, how it is used, when it is used and why this one is important.

1: Require

Sample:

$ composer require monolog/monolog


Require is the most common command that most developers will use when using Composer. In addition to the vendor/package, you can also specify a version number to load along with modifiers. For instance, if you want version 1.18.0 of monolog specifically and never want the update command to update this, you would use this command.

$ composer require monolog/monolog:1.18.0


This command will not grab the current version of monolog (currently 1.18.2) but will instead install the specific version 1.18.0.

If you always want the most recent version of monolog greater than 1.18.0 you can use the > modifier as shown in this command.

$ composer require monolog/monolog:>1.18.0


If you want the latest patch release within your current minor version but don't want any minor updates that may introduce new features, you can specify that using the tilde.

$ composer require monolog/monolog:~1.18.0


The command above will install the latest version of monolog v1.18. Updates will never update beyond the latest 1.18 version.

If you want to stay current on your major version but never want to go above it you can indicate that with the caret.

$ composer require monolog/monolog:^1.18.0


The command above will install the latest version of monolog 1. Updates continue to update beyond 1.18, but will never update to version 2.
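For reference, whichever constraint you choose simply ends up in the require block of your composer.json; with the caret constraint from above it would contain something like this:

{
    "require": {
        "monolog/monolog": "^1.18.0"
    }
}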

There are other options and flags for require, you can find the complete documentation of the command here.

2: Install a package globally

The most common use of Composer is to install and manage a library within a given project. There are however, times when you want to install a given library globally so that all of your projects can use it without you having to specifically require it in each project. Composer is up to the challenge with a modifier to the require command we discussed above, global. The most common use of this is when you are using Composer to manage packages like PHPUnit.

$ composer global require "phpunit/phpunit:^5.3.*"


The command above would install PHPUnit globally. It would also allow it to be updated throughout the 5.x series because we specified ^5.3.* as the version constraint. You should be careful when installing packages globally. As long as you do not need different versions for different projects you are OK. However, should you start a project and want to use PHPUnit 6.0.0 (when it releases), and PHPUnit 6 breaks backwards compatibility with the PHPUnit 5.* versions, you would have trouble. Either you would have to stay with PHPUnit 5 for your new project, or you would have to test all your projects to make sure that your unit tests still work after upgrading to PHPUnit 6.

Globally installed projects are something to be thought through carefully. When in doubt, install the project locally.

3: Update a single library with Composer

One of the great powers of Composer is that developers can now easily keep their dependencies up-to-date. Not only that, as we discussed in tip #1, each developer can define exactly what “up-to-date” means for them. With this simple command, Composer will check all of your dependencies in a project and download/install the latest applicable versions.

$ composer update


What about those times when you know that a new version of a specific package has released and you want it, but nothing else updated. Composer has you covered here too.

$ composer update monolog/monolog


This command will ignore everything else, and only update the monolog package and its dependencies.

It’s great that you can update everything, but there are times when you know that updating one or more of your packages is going to break things in a way that you aren’t ready to deal with. Composer allows you the freedom to cherry-pick the packages that you want to update, and leave the rest for a later time.

4: Don’t install dev dependencies

In a lot of projects I am working on, I want to make sure that the libraries I download and install are working before I start working with them. To this end, many packages will include things like unit tests and documentation. This way I can run the unit tests on my own to validate the package first. This is all fine and good, except when I don't want them. There are times when I know the package well enough, or have used it enough, to not have to bother with any of that.

Many packages create a distribution package that does not contain tests or docs. (The League of Extraordinary Packages does this by default on all their packages.) If you specify the --prefer-dist flag, Composer will look for a distribution file and use it instead of pulling directly from GitHub. Of course, if you want to make sure you get the full source and all the artifacts, you can use the --prefer-source flag.
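For example, to pull the distribution archive instead of cloning the full repository, you would run something like this (the same flag also works with install and update):

$ composer require monolog/monolog --prefer-dist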

5: Optimize your autoload

Regardless of whether you --prefer-dist or --prefer-source, when your package is incorporated into your project with require, it just adds it to the end of your autoloader. This isn’t always the best solution. Therefore Composer gives us the option to optimize the autoloader with the --optimize switch. Optimizing your autoloader converts your entire autoloader into classmaps. Instead of the autoloader having to use file_exists() to locate a file, Composer creates an array of file locations for each class. This can speed up your application by as much as 30%.

$ composer dump-autoload --optimize


The command above can be issued at any time to optimize your autoloader. It’s a good idea to execute this before moving your application into production.

$ composer require monolog/monolog:~1.18.0 -o


You can also use the optimize flag with the require command. Doing this every time you require a new package will keep your autoloader up-to-date. That said, it's still a good idea to get in the habit of using the first command as a safety net when you roll to production, just to make sure.

BONUS: Commit your composer.lock

After you have installed your first package with Composer, you have two files in the root of your project: composer.json and composer.lock. Of the two, composer.lock is the more important one. It contains detailed information about every package and version installed. When you issue a composer install in a directory with a composer.lock file, Composer will install the exact same packages and versions. Therefore, pulling a git repo on a production server will replicate the exact same packages in production that were installed in development. The corollary of this is that you never want to commit your vendor/ directory; since you can recreate it exactly, there is no need to store all of that code in your repo.
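In practice that usually means one entry in your project's .gitignore so the vendor directory never ends up in the repository, while composer.json and composer.lock are committed:

/vendor/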

It is recommended that you also commit your composer.json. When you check out your repo into production and do an install, Composer will use the composer.lock instead of the composer.json when it is present. This means that your production environment is set up exactly like your development environment.
