Tag Archives: C

_Generic in C And Generic LinkedList Implementation

Tada! Finally a non-Javascript post, for those of you who stopped visiting my blog because of my sick Js addiction. My long-lost friend Aditya is back, and while talking to him, I got to read a piece of code that he wrote back when we were in our third semester, last year. The aim was to build a generic container in C that could store different datatypes. So yes, literally storing [21, "abhishek", 3.142, 'z'] in a single dynamically allocated linked list. That was pretty bold for us back then.

For making that happen, he made use of _Generic, the generic selection expression that C11 introduced. For those unfamiliar with it, I'll begin by explaining the basics. For those of you interested in the linked list code, scroll down to the bottom of this article. You can always check out the C11 draft [section 6.5.1.1 Generic selection, pg #96] (the published standard costs around $60 but the draft is essentially the same thing).

_Generic follows the format _Generic(controlling-expression, association-list), where controlling-expression is the expression whose type is to be detected, and association-list is a dictionary of type: expression pairs. The type can be any complete object type, and the expression can be pretty much anything that can be evaluated: a constant, a pointer to a function, any expression at all.

To put it simply, compare it to the switch statement for type detection. For example,

#include <stdio.h>
int main() {
  char *teststring = "my char array"; // target object
  char *type = _Generic(
    teststring,
    int: "int",
    int*: "int *",
    char: "char",
    char*: "char *",
    float: "float",
    default: "default"
  ); // the branch matching teststring's type is selected at compile time
  printf("%s", type);
  return 0;
}

$ gcc main.c
$ ./a.out
char *

A _Generic expression can be wrapped in a nice-looking macro, say detect_type(). Compare this to a (pseudo) switch statement that evaluates the case matching the type of its controlling expression:

// Pseudocode may resemble Javascript. Reader discretion advised
var someobj = "text here" // target object
var type // to store the typename
type_switch(someobj) {
  case int:
    type = "int"
  case int*:
    type = "int*"
  case char:
    type = "char"
  case char*:
    type = "char*"
  default:
    type = "default"
}
print(type)

So that is it: compile-time type identification in C with a simple construct. We can do pretty amazing stuff with it using pointers to functions. For example, a generic print function that can take an argument of any type and print it. I have added cases for integers, characters and strings, but others can be added just as easily.
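Here is a minimal sketch of what such a print macro could look like (a reconstruction along those lines, not the exact original):

#include <stdio.h>

// Sketch of a generic print macro: _Generic picks the matching printf
// format string at compile time, based on the argument's type.
#define print(x) printf(_Generic((x), \
    int: "%d\n",                      \
    char: "%c\n",                     \
    char *: "%s\n",                   \
    double: "%f\n",                   \
    default: "%p\n"), (x))

int main() {
  int i = 21;
  char c = 'z'; // note: a bare 'z' literal has type int in C, hence the variable
  print(i);
  print(c);
  print("abhishek");
  print(3.142);
  return 0;
}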

That was neat, wasn't it? Now back to Aditya. He used the same construct to create a dynamically allocated linked list. I am pasting the gist here so that you can read his beautiful code.

File linkedlist.c

File linkedlist.h

File main.c
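For a feel of the idea, here is a condensed single-file sketch (my reconstruction along the lines described above, not Aditya's actual code): a tagged union holds the value, and a _Generic macro dispatches to the matching add function.

#include <stdio.h>
#include <stdlib.h>

typedef enum { T_INT, T_DOUBLE, T_CHAR, T_STRING } Type;

typedef struct Node {
    Type type;
    union { int i; double d; char c; char *s; } value;
    struct Node *next;
} Node;

static Node *head = NULL;

static Node *new_node(Type t) {
    Node *n = malloc(sizeof *n);
    n->type = t;
    n->next = head; // push at the front, for brevity
    head = n;
    return n;
}

static void add_int(int x)       { new_node(T_INT)->value.i = x; }
static void add_double(double x) { new_node(T_DOUBLE)->value.d = x; }
static void add_char(char x)     { new_node(T_CHAR)->value.c = x; }
static void add_string(char *x)  { new_node(T_STRING)->value.s = x; }

// one entry point for every supported type
#define add(x) _Generic((x), \
    int: add_int,            \
    double: add_double,      \
    char: add_char,          \
    char *: add_string)(x)

int main() {
    add(21);
    add("abhishek");
    add(3.142);
    add((char)'z'); // cast needed: a bare 'z' has type int in C
    for (Node *n = head; n != NULL; n = n->next)
        switch (n->type) {
            case T_INT:    printf("%d\n", n->value.i); break;
            case T_DOUBLE: printf("%f\n", n->value.d); break;
            case T_CHAR:   printf("%c\n", n->value.c); break;
            case T_STRING: printf("%s\n", n->value.s); break;
        }
    return 0;
}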

To compile it, run ~$ gcc main.c linkedlist.c -o linkedlist on any machine with GCC installed (I've tested it on versions 4.9 and above). Run it with ~$ ./linkedlist and you should see the different elements that we've added printed to standard output. You may also pack the linked list into a statically linked library by running:

~$ gcc -c linkedlist.c -o linkedlist.o
~$ ar rcs linkedlist.a linkedlist.o
~$ gcc main.c linkedlist.a

Interesting, isn’t it? Want to thank him, or have any comments or improvements? Drop them into the comment box. Thank you for reading.


A Day Of Struggle With Python IDEs

Yesterday, I gave up on doing my next web application project with Qt. I knew C++ was never meant to be a language of the web, but I really had some hopes for Qt. It actually is good, and it would arguably have run faster than any other platform or language for the web. The problem is, developing anything in it is very time-consuming, especially web apps, and I really don't have all that time. So I decided to do it in Python or Ruby. After reading some articles, it became clear they are not much different: Python is a more general-purpose programming language, while Ruby leans more towards the web, with its Rails framework.

I chose Python, simply because I already know how to write code in Python. It was time to go shopping for a good framework to develop this thing. No doubt, it finally came down to Django versus Flask. I chose Flask. It was damn simple, or at least it seemed like that. I tried the simple hello.py script which displays "hello world" on localhost port 5000.

I tried that in Emacs, but immediately felt the need for an IDE in this foreign land. I looked in my 'Downloads' folder, and luckily Aptana Studio 3 was still sitting there. I had it installed when I was into PHP last year; since then, it got removed and thrown into a corner. I installed it. I really loved Aptana back then, for its usability. But now it started to act like a stubborn child, refusing to detect Flask. I googled and googled, but alas, no way. Many people seemed to be having the same problem, and the only solution I saw didn't work.

Seeing no way out, I uninstalled Aptana and googled for other good IDEs. PyCharm was what most people were recommending. I decided to give it a try. It turned out to be a memory hog. Both my CPUs were doing a constant 100% and other windows turned sluggish too. It was using about half a tonne of RAM, with a single .py file open containing 4 lines of text. No way, again. Removed it, and went to eat some food. Damn.

I was not ready to go back to the sluggish Eclipse, nor to NetBeans for the same reason. Finally I settled for Komodo Edit, the free lite version of the commercial Komodo IDE. It lacks many things that you would ask of an IDE, and it is only a little better than using bare Emacs or Vim. Still, for now, I am using it. I configured it to execute Python scripts right inside the window following this tutorial: https://stackoverflow.com/questions/21686395/how-to-run-the-first-python-program-in-komodo-edit-8-5

Life's good, but I'm just hoping to learn Flask for my next project as fast as I can.

Time

Time is one of my favorite subjects, because of its highly mysterious and seemingly absolute nature. It gives everything a reference frame; how else on earth would you say when a thing actually occurred? It actually gives even the other three dimensions a reference frame. Newton put forth the idea that nothing in the Universe is absolutely stationary: not you standing still, because the earth is spinning, and not the earth, since it is revolving around the sun. The sun goes round the Milky Way (our home galaxy) once every 225 million years. Not surprisingly, the Milky Way is not stationary either, but moving towards Andromeda (our neighboring galaxy) at about 400,000 km/h. The point here is that you cannot determine the absolute position of an object anywhere in the Universe by giving just the three physical dimensions. It simply doesn't work where there is no frame of reference. Then how do you tell your position? You tell it in terms of space-time. The fourth dimension, time, can be in any unit of time; we can even express the first three coordinates in light-seconds, which is distance stated as the time light takes to travel it.

But well, time is in fact only absolute to an observer at a particular position. Time is affected by gravity, the same way light is. Simply put, time runs somewhat slower near a body of high mass, like the earth, or a black hole for some serious observations. We don't notice these differences in our routine life since they are too small in the case of the Earth (the Earth is not nearly dense enough for the delay to be directly observable by us). But there are applications on Earth that require precise measurement of time, and one such application is the Global Positioning System, the GPS in almost every phone these days. It relies on satellites that measure the exact time taken for a signal to travel from each satellite to the receiver, creating a triangulated fix that estimates the position of the receiver on the surface of the planet to within a couple of meters. These satellites are fed a slightly adjusted time at intervals, just so that they stay in step with the 'slower' time on the Earth's surface (clocks on the surface tick more slowly on account of Earth's mass, while the satellites' clocks are also slowed a little by their orbital speeds, a special-relativity effect; the net difference has to be corrected for).

There is another interesting thing to note here. How can you tell whether you are experiencing a slower version of time than you did some time in the past? You can't. The reason is that 'slower' and 'faster' time exist only for observers experiencing a relatively 'faster' or 'slower' time. So it is all relative. Talking about time in the context of the speed of light, we come to an amazing theory by Einstein: the speed at which light travels is absolute and does not depend on the relative speed and position of the observer. To put it simply, imagine you are traveling on a highway at 60 km/h. Another vehicle traveling at 70 overtakes you. For you, the relative speed of the other vehicle is 10 km/h (70-60=10). This is how we expect things to behave, right? Of course. But things change as one approaches the speed of light. Light doesn't behave the way our highway cars did: however fast you are traveling, light will still be 300,000 km/s faster than you, even if you are actually doing 290,000 km/s (just about impossible for a massive body like us, though certain elementary particles can achieve such speeds). This can be considered the universal speed limit. So what happens when some particle tries to break this barrier?

To understand this, let us consider the cone of light. When an event occurs, it sends out light (or radiation) in all possible directions. This is like throwing a pebble into a pond: the ripples spread in all possible directions, growing every instant, the outermost wave being the one initially created by the pebble itself. Now think of this as light. An event occurs and light from that event spreads in all possible directions. Taking time on the y-axis and space on the x-axis, we can imagine a cone being created.

https://upload.wikimedia.org/wikipedia/commons/9/9a/World_line.png

Now when something occurs, you don't see it immediately, not until the light from that event hits your retina. Consider the sun, for example. It is at a distance of about 149 million kilometers from the earth, and light takes about 8 minutes and 20 seconds to travel that distance. This is why the sunlight you're seeing now is about 8 minutes and 20 seconds old: it takes the cone of light from the sun that long to reach us and make us aware of it. Still, by the standards of other celestial objects, the sun is quite close to us. The nearest star after the sun, Alpha Centauri, is at a distance of 4.3 light-years from us; that is, the cone of light takes 4.3 years to reach us from Alpha Centauri. Being a cone of light, it naturally travels outwards at the speed of light.

Back to our original question: what if some particle tries to break the barrier of the speed of light? If that happened, it would move from inside the cone of light of an event to outside of it, effectively seeing the event before its light arrives, hence 'traveling into the past'. But is that possible? No, according to the special theory of relativity. Since the speed of light is absolute in every reference frame, nothing can reach it (the effective mass of an object keeps increasing with velocity; close to the speed of light it grows without bound, and pushing an object of unbounded mass is not possible). The constancy of the speed of light was even borne out by experiment (the Michelson-Morley experiment). This theory was consistent with observations and with most other theories, except Newton's theory of gravitation. If nothing travels faster than light, how do we explain gravitation seemingly acting on distant planets and stars instantaneously? Does gravity travel at infinite speed, in contradiction with relativity? Maybe.

Einstein spent a great deal of time trying to find a theory that would be consistent with both the theory of relativity and the theory of gravitation. Finally he came up with something called the general theory of relativity. This theory suggested that the force of gravity doesn't act like normal forces; instead it acts on the fabric of 'space-time', bending it in proportion to the energy and mass of the body. It suggested that bodies like the moon revolving around the earth are not being dragged around the other body, but actually taking a straight path through the four-dimensional space-time fabric; they only 'appear' to travel in elliptical orbits as we see them in three dimensions. This was a revolutionary suggestion. It implied that matter and light alike travel curved paths near objects of high mass or energy. That did, in fact, explain why time runs slower near the surface of the earth than in space, and also how we are able to see stars that are just behind the sun, even though a straight path from them to us would be blocked.

Now, looking back at where this article started: yes, I was all wrong. I knew there was no such thing as absolute space, but as it turns out, there is no such thing as absolute time either. Interesting though, isn't it?

Time in the world of computers!

Computers happen to be my most favorite subject after astrophysics, so how could I let this special opportunity go without playing around with time functions in C++? To be honest, I had been on this time thing for about 2 weeks. I needed to calculate time precisely to compare the efficiency of sorting algorithms. Not surprisingly, computers are way too fast for measuring such things in seconds or milliseconds. To see the difference in time taken by two algorithms sorting a set of 50 random numbers, you need to measure time in nanoseconds, or at least microseconds. Although I had trouble (a lot of trouble, in fact) getting the time functions to work for me, I finally succeeded in getting nanosecond-level precision without any extra props, just my PC and the g++ compiler.

Starting from the first thought that came to my mind: get a C++ reference for built-in functions. time() from <ctime> does a great job of fetching the number of seconds since 1st Jan 1970.
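For reference, a minimal use of it looks like this (my example, not from the original post):

#include <stdio.h>
#include <time.h> // <ctime> in C++

int main() {
    // seconds elapsed since 00:00 UTC, 1st Jan 1970 (the Unix epoch)
    time_t now = time(NULL);
    printf("%lld\n", (long long)now);
    return 0;
}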

But that is not good enough for me here. I knew the bash command 'date' could serve my purpose, but I couldn't see a way to get the command's output back into my program. So I got over-smart here. I executed system("date +%s%N > text"), which writes the current timestamp, down to the nanosecond, into the file 'text'. Then I read that time back from the file into an unsigned long long int (since it is 19 digits long). After the calculations, repeat it again and subtract the initial time from the final one. A bit of a long cut, but I hoped it would work.
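The original snippet isn't shown here; it presumably looked something like this (my reconstruction; the file name 'text' is from the post, the rest is assumed):

#include <stdio.h>
#include <stdlib.h>

// shell out to `date`, then read the nanosecond timestamp back from the file
static unsigned long long stamp_ns(void) {
    unsigned long long t = 0;
    system("date +%s%N > text");
    FILE *f = fopen("text", "r");
    fscanf(f, "%llu", &t);
    fclose(f);
    return t;
}

int main() {
    unsigned long long t0 = stamp_ns();
    // code under test goes here
    unsigned long long t1 = stamp_ns();
    printf("%llu nanoseconds\n", t1 - t0);
    return 0;
}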

When I ran this code, it took, surprisingly, 18+ milliseconds, or to be exact, 18649390 nanoseconds. And to be clear, I hadn't even included any code whose performance I had to measure. This is the base time, LOL. Thinking about it for a second: algorithms keep their variables in cache because even RAM is not fast enough, and here I was writing to and reading from the hard drive. That is so damn clever of me!

Pun aside, now it was time to get help. Many Stack Overflow posts and a good guy's guidance later, I finally got clock_gettime() from <time.h> to work. Yes, now I was getting some pretty good numbers, and I could see the difference when I added more stuff to the code.
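Here is a minimal sketch of that timing harness (my reconstruction; the variable names are mine): sample the clock before and after the code under test and subtract.

#include <stdio.h>
#include <time.h>

int main() {
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    // code to benchmark goes here
    clock_gettime(CLOCK_MONOTONIC, &end);

    long long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                         + (end.tv_nsec - start.tv_nsec);
    printf("%lld nanoseconds\n", elapsed_ns); // older glibc may need -lrt at link time
    return 0;
}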

Executing the raw code without any load gave some results I was hoping to see.

210 nanoseconds, which is fine for code that does nothing. This will serve all of my purposes, and should do for most anyone who is up to something with sorting algorithms.

PS – I tried hard to find good sources of information for the first part of this article; still, if there is any mistake, don't hesitate to notify me.

Handling large numbers with C/C++

I will tell you what this is about. I saw a simple-looking problem on HackerEarth about finding the factorial of given numbers. It looks easy, but another look at the constraints (1<=N<=100) changes everything. Well, not everything if you are going to write it in a language like Python, which is dynamically typed and has built-in support for big numbers, but it is real trouble in C or any similar statically typed language.

So first of all, what exactly is the difference between statically and dynamically typed languages? Dynamically typed languages require their interpreters to detect the type of a variable from the value assigned to it. In statically typed languages, on the other hand, the type of every variable must be known at compile time.

Some believe the latter has advantages over the former: since we explicitly state the types, runtime errors are reduced and runtime performance improves. We won't get into that discussion here.

So, to our problem.

Before getting into it, I will first show the Python code, which worked flawlessly and gave answers for factorials of well over 100.
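Python's integers grow as needed, so a plain loop is all it takes; the original snippet was presumably something very close to this (a reconstruction, in the Python 2 style of the era):

n = int(raw_input())
fact = 1
for i in xrange(2, n + 1):
    fact *= i
print fact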

As you can see, it doesn't get any easier than that, no matter how new to programming you are. But how does one solve the same problem in a language with no native support for big numbers (of order >10^100)? Simple: we make use of algorithms. The first thing that pops into one's mind when dealing with numbers this large is arrays. Yes, that is the right way to go, or at least the one that worked for me.

So here is the plan. We create an array of integers which holds one digit per index, starting with the least significant digit. For example, if we were to store the number 12345 in the array, we would do it like this:

54321

That is, array[0] stores '5', array[1] stores '4' and so on. We have reversed the number for a specific reason. To see that reason, you have to go back to your 2nd grade class, where you were taught to multiply two two-digit numbers. How did you do that?

   37
 x 63
 ----
  111    (37 x 3)
 2220    (37 x 60)
 ----
 2331
Got the memory back? Although it may seem a trivial thing now, notice that you never multiply two numbers whose product is more than 81, that is, 9×9, the product of the two largest single-digit numbers. So can you make the computer follow the same method to calculate the factorial, so that every intermediate value stays tiny, well inside even the shortest numeric data type in C (unsigned short: 65,535)? Yes, of course. We are coders, right? 😉

To start off, we will need variables. num accepts the input number whose factorial is to be found. cur stores the result of the calculation i * arr[j] + temp. The least significant digit of cur (for example 3, in the case of 123) goes into arr[j], while the remaining digits get stored in the temp variable. We follow this step till the end of the array, whose current length is tracked by the pos variable. pos is initialized to 1, since we initialize arr[] with arr[0] = 1 (we will be multiplying this value by 2, 3, 4, ..., num in turn, and we don't want our answer to evaluate to 0).

After this loop, we need to empty out the carry left in the temp variable. This also happens digit by digit, in the same reversed order, and here we increment pos so that it always equals the number of digits in arr[].

Finally, you can print out arr[] in reverse order to get the expected answer; this should not surprise you, since we have been storing the digits in reverse in arr[] all along. Here is the code that I wrote. I didn't cross-check the results for larger values of num, so take care with that.
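Below is a reconstruction in C that follows exactly the steps described above (the original was written in C++; the names num, cur, temp and pos match the description, the rest is my own):

#include <stdio.h>

int main() {
    int num;
    int arr[200]; // 100! has 158 digits, so this is plenty
    int pos = 1, temp = 0, cur;

    scanf("%d", &num);
    arr[0] = 1; // start from 1 so the running product isn't 0

    for (int i = 2; i <= num; i++) {
        temp = 0;
        for (int j = 0; j < pos; j++) {
            cur = i * arr[j] + temp;
            arr[j] = cur % 10; // least significant digit stays in place
            temp = cur / 10;   // the rest is carried forward
        }
        while (temp > 0) {     // empty out the leftover carry
            arr[pos++] = temp % 10;
            temp /= 10;
        }
    }

    for (int j = pos - 1; j >= 0; j--) // print in reverse
        printf("%d", arr[j]);
    printf("\n");
    return 0;
}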

So that was it for this short article. I am reading more on GMP (the GNU Multiple Precision Arithmetic Library), which is written exactly for this purpose. Nevertheless, it is always good to know how to do it by hand. Thank you.

Jumping over multiple programming languages

I just happened to take a look at my own blog. Starting with some posts about Python, I went to C, then PHP-MySQL and then Javascript. All of this in around 3 months. Now, it is great to know multiple languages, but the thing I feel is, you must 'know' them: the underlying philosophy, the best practices and so on. I have started with at least 10 languages this year. I can code little bits in all of them, namely C, Java, Python, PHP, Javascript, and some others like SQL and HTML/CSS, which are not-so-much-programming languages. The problem is, I cannot code fluently in any of the languages listed above. I just tend to get bored with a language after some time and then jump to something else. Something needs to be done, right?

Actually, I didn't notice this myself. Friends I hang out with told me that I needed to concentrate on one thing until I mastered it, before moving on to something else. I couldn't agree more, but I also couldn't decide what that one thing should be. So I kept trying different things till I found something that would keep me distracted for long enough.

I started with web development. I hate PHP, although I also tend to code in it the most, because it is efficient for quick dirty work. Python happens to be one of my favorite languages, because of its neatness and power. But at the same time, coming from C, I don't feel the depth in other languages that I got in C.

Web programming is great for some quick compliments. LOL. Just after a week, I could write pieces of code that would amaze everyone. But deep down somewhere, I didn't feel good about myself. It's like choosing the easy path rather than the challenging one. So I thought again about some other stream. Yes, software programming is pretty good.

So I started with Java. I did learn all the basics and some intermediate stuff. Java is great, but still, I missed C. I needed something close to the OS. I needed a language that I could use to interact with my Arduino. I needed something like C. So why not C again, perfecting it?

Yes, I felt, getting back to C seemed to be the best way to go.

Then, out of nowhere, I got to attend Bjarne Stroustrup's lecture at IIT-Bombay. Although I never got into the actual seminar, his mere presence was enough to push me into C++. I read about it, and even watched some online videos about C++ and opinions from experts. It seemed perfect: the control and power of C, the flexibility of Java and the usability of (almost) Python.

I have now been doing C++ for about 2 weeks. Not much really, basic syntax and stuff. The important thing is, I am enjoying it. I really hope I stick to it for some time now, till I master it. I don't want to be an example of

jack of all trades and master of none

Let’s hope my next post here has some serious C++ in it. 

Python vs C – How simple is it to write a pair of communicating sockets?

Lately I have been reading a lot of articles online comparing Python to other languages. It is no secret that the Python community is growing, and along with it, of course, the number of people who promote and recommend the language.

Let me not add to the already large mass of those articles by boasting about Python's usability, speed and practicality; rather, I will compare the two languages by writing a small socket client/server pair in each of them.

But first, let me give you some of my personal opinions on both languages, since I know them well enough. C is very dear to me, not only because it was the first language I ever learnt, but also because it runs most of GNU, and GNU is, well, very dear to me! C also happens to be my second language of choice, after Python (although I know bits of Java, I prefer not to use it; not sure why, but I hate it). I have been programming in Python only for the last couple of months, and I am really impressed. I solve HackerEarth and CodeChef problems as a pastime, and although I could have done all of those problems in C, doing them in Python took around 1/10th of the time (literally!) and 1/10th of the typing effort. I will admit, C is much more fun to write than Python, simply because you 'feel' the code is yours, and I love to code in C whenever I am free. But would I use it in an environment where time is the priority? Probably not. Maybe when C is the only way out, but most of the time, I am better off writing it in Python.

That being said, the popularity of C doesn't get any less, and it is going to stay that way for as long as, maybe, the Internet itself. Here's something I found.

https://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
You see the thing on top there? Yes, it is there for a reason. To make it short: C is powerful, very powerful. C gives you access to things you cannot really imagine in other languages. Python, on the other hand, is practical, flexible and easy to learn. Web apps, sockets, Raspberry Pi, Arduino, Android or anything else you can imagine: there is bound to be a library for it, made by someone, somewhere.

The code part.

I am giving the client and server code in both languages here, as is. No explanation and stuff, because that's not the topic here. Note that all the sources were tested running OK on Kali 1.0.6 with gcc and all stock stuff, so it should not be much trouble to get them running. Windows folks: this is POSIX socket code, so you will need something like Cygwin and its gcc, run from the command prompt; it won't run from any IDE.

Python

Writing a pair of communicating TCP sockets takes around 30 minutes, including the understanding part, if you have some background in networking. Python does most of the stuff for you: you just create a socket object, supply a host and port, and that's it. The rest is left to your imagination (or not, I got too carried away!). Here comes the code:
client.py

import socket
s = socket.socket()
host = socket.gethostname()
port = 1356 
s.connect((host, port))
shit = s.recv(1024)
print shit
s.close()
server.py

import socket
s = socket.socket()
host = socket.gethostname()
s.bind((host, 1356))
s.listen(5)
while True:
    c, addr = s.accept()
    c.send("Message from server")
    c.close()

And that is it. Even if it looks lame (which it does), it is maybe the simplest thing that qualifies to be called a server/client pair.

C

Now let's write the same in C. This is around 4 times the size of the Python code, and much of the stuff is done by hand (nothing new for C, I suppose). This code is the shortest I could cut it down to, and it does just one simple task: send the "Client talking loud!\n" message to the server over port 1356 on localhost. The parameters can be edited as per convenience to suit any inter-network testing, but that's the most this code will do. Nevertheless, this is a TCP client/server model.
client.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>   /* bzero(), bcopy() */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <unistd.h>

int main(int argc, char **argv) {
    int sock, port;
    struct sockaddr_in serv_addr;
    struct hostent *server;
    char buffer[256];

    port = atoi("1356");
    sock = socket(AF_INET, SOCK_STREAM, 0);
    server = gethostbyname("127.0.0.1");

    /* fill in the server's address and port */
    bzero((char *)&serv_addr, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);
    serv_addr.sin_port = htons(port);

    connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr));

    /* send one message and quit */
    bzero(buffer, 256);
    strcpy(buffer, "Client talking loud!\n");
    write(sock, buffer, strlen(buffer));
    close(sock);
    return 0;
}

server.c

#include <stdio.h>
#include <strings.h>   /* bzero() */
#include <stdlib.h>
#include <unistd.h>    /* read(), close() */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(int argc, char **argv) {
    int sock, nsock, port;
    socklen_t clilen;
    char buffer[256];
    struct sockaddr_in serv_addr, cli_addr;

    sock = socket(AF_INET, SOCK_STREAM, 0);
    bzero((char *)&serv_addr, sizeof(serv_addr));
    port = atoi("1356");
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = INADDR_ANY;   /* accept on any local interface */
    serv_addr.sin_port = htons(port);

    bind(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
    listen(sock, 2);

    clilen = sizeof(cli_addr);
    nsock = accept(sock, (struct sockaddr *)&cli_addr, &clilen);  /* blocks until the client connects */

    /* read the client's message and print it */
    bzero(buffer, 256);
    read(nsock, buffer, 255);
    printf("%s\n", buffer);

    close(sock);
    close(nsock);
    return 0;
}

Here is the expected output (assuming you compile the two files to server and client):
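~$ ./server &
~$ ./client
Client talking loud!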

Sorry, the code above is only lightly commented, and it really needs some explanation. I would have written more, but then the code would have grown threefold (LOL)! It will need another nice article to explain all the stuff in that client.c and server.c code. I will conclude here. Thank you for reading 🙂

Update: If you happen to run any of the above code, make sure you run server first!

Simple TCP banner grabber in C

Hello folks, it's been a great week. I got the book Expert C Programming: Deep C Secrets by Peter van der Linden. I read about 100 pages in a couple of days, and I have never gained so much confidence reading any other book. Many concepts got sharper, doubts got cleared and my confidence got a boost. A must-read book if you know some C and want to understand the nuts and bolts of it.

Riding the same wave, I decided to write a small utility with sockets later that day. It went great. Lots of coding, googling (I just can't code without Google, maybe a sign of a newbie) and debugging. I ended up with a program that was partially correct. It works, but it doesn't: it actually does more than what it is told, and at first I couldn't find out why. Still, I am posting it here for those interested; see the update below for what turned out to be wrong. It looked like some of the array locations were being interpreted as ports, in the ports[] array.

usage example: ./scanner 192.168.1.2 22,80,443

 root@kali:~/Desktop/C/socket# ./client 192.168.1.10 22,80

[+]Testing port: 22
[*]SSH-2.0-OpenSSH_6.0p1 Debian-4

[+]Testing port: 80
[*]<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>501 Method Not Implemented</title>
</head><body>
<h1>Method Not Implemented</h1>
<p>garbage to /index.html not supported.<br />
</p>
<hr>
<address>Apache/2.2.22 (Debian) Server at 127.0.1.1 Port 80</address>
</body></html>

[+]Testing port: 4195840
[-]Error Connecting to port

[+]Testing port: 0
[-]Error Connecting to port

[+]Testing port: 1476291006
[-]Error Connecting to port

[+]Testing port: 32767
[-]Error Connecting to port

I was not sure what that was, the part after the actual banner I mean. Update: the loop in main() was iterating over all ten slots of the ports[] array, even when fewer ports were supplied on the command line, so the garbage in the uninitialized slots got interpreted as port numbers. The listing below bounds the loop by the number of ports actually parsed. Here is the code, if anyone wants to have a look.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <strings.h>   /* bzero(), bcopy() */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>

void scanner(int port, char host[]);

int main(int argc, char **argv) {
    char host[100];
    char *p;
    int ports[10];
    int nports = 0;   /* number of ports actually parsed */
    int i;
    int var;
    char tok[] = " ,";

    if (argc < 3) {   /* we need both a hostname and a port list */
        fprintf(stderr, "[+]usage: %s <hostname> <port,port,port...>\n", argv[0]);
        exit(0);
    }

    p = strtok(argv[2], tok);
    strcpy(host, argv[1]);
    while (p != NULL && nports < 10) {
        sscanf(p, "%d", &var);
        ports[nports++] = var;
        p = strtok(NULL, tok);
    }

    /* loop only over the ports we parsed; iterating over the whole array
     * reads uninitialized slots (the mystery ports seen above) */
    for (i = 0; i < nports; i++) {
        fprintf(stdout, "\n[+]Testing port: %d\n", ports[i]);
        scanner(ports[i], host);
    }
    return 0;
}

void scanner(int port, char host[]) {

    int sock, n;
    struct hostent *server;
    struct sockaddr_in serv_addr;

    char buffer[4096];

    server = gethostbyname(host);
    if (server == NULL) {
        fprintf(stderr, "[-]No such host\n");
        return;
    }

    sock = socket(AF_INET, SOCK_STREAM, 0);
    /* Edit the params of socket to scan UDP ports,
     * should be pretty straight forward I suppose.
     */

    if (sock < 0) {
        fprintf(stderr, "[-]Error creating socket\n");
        return;
    }

    bzero((char *) &serv_addr, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    // AF_UNIX for Unix style socket

    bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);
    serv_addr.sin_port = htons(port);

    n = connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
    sleep(2);   /* give slow services a moment before we talk */
    if (n < 0) {
        fprintf(stderr, "[-]Error Connecting to port\n");
        return;
    }

    /* send a junk request so the service identifies itself */
    memset(buffer, 0, sizeof(buffer));
    strcpy(buffer, "garbage\r\n");

    n = write(sock, buffer, strlen(buffer));
    if (n < 0) {
        fprintf(stderr, "[-]Error writing (Port closed maybe?!)\n");
        return;
    }

    bzero(buffer, 4096);
    n = read(sock, buffer, 4095);   /* leave room for the terminating NUL */
    if (n < 0) {
        fprintf(stderr, "[-]Error reading (Port closed maybe?!)\n");
        return;
    }

    fprintf(stdout, "[*]%s\n", buffer);
    close(sock);

}