Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: "We will not hide problems."

Feeds

May 10, 2024

hackergotchi for Daniel Pocock

Daniel Pocock

Security & Debian: Urgent: New Feed URLs after another WIPO censorship

After the recent xz-utils backdoor, I'm going to start looking at the security cover-ups in Debian. There are still tens of thousands of messages about this on debian-private that have not yet been leaked by anybody.

After xz-utils, people have complained that the Debian suicide cluster has been done to death. Readers want to know about Debian competence, or lack thereof, in security. What does debian-private reveal about all this? If you are not one of the people who has been sucked into freeworking for DebianUbuntuGoogle then you might not feel the suicides are relevant to your personal circumstances. Nonetheless, everybody using Debian today is concerned about security, whether you are a full Debian Developer or just an end user.

Please URGENTLY update your feed readers and/or home page so that you don't miss what's coming next.

Some of the security blogs have been timed to coincide with the European Parliament elections. Please read about my candidacy here.

Here are the new feed URLs to replace the censored debian.news. Please URGENTLY change your browser or feed reader before the censors complete the theft of the debian.news domain and kill off this site like all the suicide cluster victims.

URL: uncensored.deb.ian.community

RSS 1.0: https://uncensored.deb.ian.community/rss10.xml

RSS 2.0: https://uncensored.deb.ian.community/rss20.xml

ATOM: https://uncensored.deb.ian.community/atom.xml

10 May, 2024 05:00PM

May 09, 2024

Support for harassment and abuse victims

Two more high-profile cases of abuse were successfully prosecuted yesterday. In one case, the victim waived her anonymity so that the name of her brother, Karl Ronan, could be published as an offender. She didn't have to waive her anonymity, as the judge was already sending the offender to jail anyway.

The other case, in County Donegal, involves Catholic priest Eamonn Crossan. In this case, news stories about a previous prosecution had prompted a second victim to come forward many decades after the abuse occurred.

One of the things that strikes me from the case of Karl Ronan is the news headline selected by Ireland's national broadcaster RTE. The victim was quoted telling other victims "you will be believed".

RTE, Karl Ronan, Lynn Ronan

The sad reality is that around Ireland and around the world, victims and potential witnesses are not always treated with respect.

Based on personal experience of these cases and the people involved, I feel that witnesses and victims need to exercise some caution in deciding who to trust. In some cases, the first contact they make may not provide a suitable response and they may need to exercise more courage to reach out to somebody else.

While the assistance to victims has improved in recent decades, I still feel that the statement "you will be believed" hides the amount of effort some victims have to make to find somebody who does believe them.

Victims will not always go to the justice system. They may disclose something to a family member, school teacher, doctor, trade union or even an elected representative.

I'm personally aware of various cases where victims and witnesses reported to doctors and didn't receive appropriate care. In one case reported by the press, the doctor, a former classmate, began a relationship with the victim. It involved a pregnancy and an abortion. The doctor's peers granted him a one year suspension from practice.

The unredacted report about abuse in the Archdiocese of Melbourne tells us that various victims reported to other priests. Some of the other priests collaborated to undermine these victims and help the offenders.

In the recent proceedings before the World Intellectual Property Organization, I informed the Legal Case Manager at WIPO in Geneva that the complainant has been trolling me with references to abuse. Carla shared details of her history with an eating disorder and I had mentioned to female victims in Albania that my cousin participated in the choir of Cardinal George Pell in Melbourne, Australia.

In relation to Carla, research suggests that women who developed eating disorders and other self-harm tendencies during childhood are possible abuse victims. In relation to those in proximity to the choir, while Cardinal Pell's conviction was eventually overturned, it remains unexplained how a member of the choir developed a heroin addiction during the Pell era. Due to the intense efforts to protect the privacy of choir members, the media have failed to join the dots and confirm that at least two known pedophiles and a priest responsible for the cover-up arrangement really were in that orbit.

Family and friends observed the online mobs spreading references to harassment, abuse and anonymous victims since the conviction of Cardinal Pell in 2018. People feel sick when exposed to privacy violations like this.

In both the Ronan case and the Crossan case, it has taken decades for the victims to come forward and seek justice. It seems like a logical conclusion that anybody in proximity to the Cardinal Pell choir could also discover that they are a witness to relevant facts at any time in the future. Despite the fact that the Cardinal and other offenders are deceased, there has been significant progress in civil litigation. Any former choir member could come forward at any time seeking an apology or redress from the church as an institution.

Being in proximity to this high profile case, I want to emphasize the total contempt shown for my family by the Legal Case Manager at the World Intellectual Property Organization in Geneva. Then I will go on to document some of the confirmed facts about my relationship with these high profile abuse cases and the interaction with victims.


Subject: D2024-0770 Debian / victims and witnesses to abuse
Date: Sat, 13 Apr 2024 22:04:56 +0100
From: Daniel Pocock 
To: Domain Disputes 


The insults that WIPO transmitted to me are very disturbing for me as a witness in relation to real cases of harassment and abuse.

Blackmail, defamation, shame and abuse often go together.

Please kindly advise me what assistance WIPO provides to victims and witnesses of abuse when you blackmail us to participate in your administrative procedures

The Legal Case Manager has a standard cut-and-paste reply to all queries. The Legal Case Managers employed by WIPO appear to be law graduates early in their careers. It appears that they have been given no training whatsoever in the rights of potential victims and witnesses to abuse. This female lawyer, Stanislava, is hiding her last name so we are unable to check her credentials at the bar association. The lawyers hiding their names like this remind me of the masked men with Russian accents who occupied Crimea in 2014.


Subject: (ILS) D2024-0770 <debian.chat> et al. Acknowledgement of Receipt
Date: Tue, 16 Apr 2024 16:13:25 +0000
From: Disputes, Domain <domain.disputes@wipo.int>
To: Daniel Pocock <daniel@pocock.pro>, <daniel@softwarefreedom.institute>
CC: <alessio.canova@adv-ip.it>

Dear Respondent, 

This is to acknowledge receipt of your below email communication on April 13, 2024 by the WIPO Arbitration and Mediation Center (the Center). 

The Center will forward your communication to the Panel, (when appointed). 

Sincerely,

Stanislava I.
Legal Case Manager
_______________________________________________________________________________
WIPO Arbitration and Mediation Center
34, chemin des Colombettes, 1211 Geneva 20, Switzerland T +41 22 338 82 47 F +41 22 740 37 00 E domain.disputes@wipo.int W www.wipo.int/amc 
* Please cite "(ILS) WIPO Case #" in the subject line. Thank you.

In the response created by the legal panel, the lawyer W. Scott Blackmer appears to be mocking my concerns in the final lines of his order to seize domain names used for spreading "critical commentary".

What is "critical commentary" and why is it a sin before WIPO? We can think back to the time Galileo dared to suggest the earth was not flat. WIPO's definition of "critical commentary" is a lot like the definition of blasphemy in medieval society.

Fact checking the series of events in harassment and abuse cases

I have obfuscated the names of the conspirators with pseudonyms like Father X___ and Father Y___. This is the same technique used by the Swiss financial regulator FINMA to protect the names of rogue lawyers in the Juristgate affair.

1962: the Pope published Crimen Sollicitationis. It is a procedure for investigating abuse outside the real justice system of the state.

1980s: Father X___ moves to a parish close to St Patrick's cathedral and becomes involved in administrative matters for fellow priests.

1980s: the church becomes aware that Father Y___ is an offender.

1980s: [ redacted in full ]

1994: Father X___ personally involved in exfiltration of Father Z___ from Boston back to Melbourne.

1995: letter confirms that Father X___ and Father Z___ are now housemates.

1996: from minutes of the Personnel Advisory Board:

Father X___ raised the question of how much is told to whom

1997: the boy who subsequently died was seen by the Royal Children's Hospital in relation to poor behavior at school. Various people in the online mobs subsequently repeated references to "poor behavior".

1998: the boy who subsequently died was observed experimenting with drugs at age 13 or 14.

1999: WIPO proposes the use of an administrative procedure, the UDRP, where people can argue about the use of trademarks in domain names. The parties can use the procedure to insult and defame each other. Notably, the procedure operates outside of the court system, a lot like the Canon law procedures described by the church in Crimen Sollicitationis are operating outside the state. Administrative procedures neglect a range of topics including privacy, victim support and the provision of legal aid for private bloggers.

2002: Spotlight investigators examine documents from the court house in Boston and find evidence about the communication with Father X___ in Melbourne.

2010: a key volunteer in the Debian software project sends a resignation note on the night before Debian Day and then he commits suicide.

Mark Shuttleworth indicates he is aware of the situation and that there is a high risk of copycat suicides. The question of whether a death is foreseeable is crucial in assessing both criminal and civil liability in relation to subsequent deaths. The email from Shuttleworth clearly anticipates more deaths.

2011: the next notable Debian death occurs 8 months later, on the same day Carla and I got married. The volunteer died in Switzerland so the coroner's report and cause of death were never published.

2013: I resigned my membership of the Australian Labor Party (ALP), citing the abuse of female asylum seekers from Iran and the similarity to abuse in the Catholic Church. The resignation was published by political news site Crikey.

2014: former member of the choir dies from heroin overdose. The addiction began during his time in the choir for reasons that have never been confirmed by any of the trials or the Royal Commission.

2015: the Spotlight biographical film is released in cinemas. Any comments I made about abuse prior to this could not have been influenced by the film.

2016: a woman goes to Dr A___ for assistance after a sexual assault. Dr A___ gets her pregnant and then persuades her to have an abortion. Coincidentally, Dr A___ and I were classmates many years ago.

2017: the FSFE fellowship elected me as their representative on April 24, the anniversary of the Easter Rising. Women began making reports to me about abuse in non-profit organizations receiving funds for the promotion of women in technology.

Here is the internal complaint about the harassment. The date is 12 October 2017 so the misfits publishing alternative statements about harassment are lying. I have redacted the section that identifies underage victims.

The next internal email from Larissa Shapiro at Mozilla admits that kids are at risk.

Emma Irwin from Mozilla admits this is a serious matter and asks me to speak to Marta, Mozilla's HR investigator.

It was around this time that I confided in some of the women that I had a family connection with the choir of Cardinal George Pell and that I was watching these matters very carefully.

2018: one of the women writes an email thanking me for my support to victims of harassment and abuse.

2018: Dr A___'s peers suspend him from medical practice. The duration of the suspension is 12 months.

2018: Dr Norbert Preining reaches out to me when people start using secret punishments, analogous to abuse, to blackmail him to be more docile.

2018: I publicly expressed support for Dr Preining.

On Christmas Eve, some of the men complicit in the dark network begin spreading rumors about abuse by email.

2019: the same men begin spreading rumors about abuse through source code repositories.

Mozilla has refused to publish their final report about the abuse. It is very clear from the emails written by the women that they thanked me for my support.

In February, Cardinal Pell was sent to prison. I rang a former employee of the diocese in his nursing home and made some queries about the case. That was the last phone call with my father before he died.

2021: the FSFE management has a shortage of adult volunteers and now they are offering a prize enticing children to work for free. They call the program Youth Hacking 4 Freedom (YH4F). I was the last person the FSFE Fellowship community elected as their representative. I published a blog post denouncing the YH4F program for the risk of child labor.

2022: IBM Red Hat, one of the main sponsors of the FSFE, begins a UDRP complaint through WIPO. In their bundle of evidence, they submit my article about "Google, FSFE and Child Labor" as the basis for their concerns. (their evidence bundle).

The legal panel rules that IBM Red Hat was using the UDRP to harass me.

2022: on the anniversary of the September 11 attacks, an employee of ETH Zurich files a criminal speech demand with the Swiss authorities seeking to have them shut down my company servers to destroy evidence I published about the blackmail, shaming and suicide cluster.

Coincidentally, IBM and Google have chosen Zurich for their most significant research and development centers in Europe. They are both in close proximity to ETH Zurich.

2023: on 5 January, those who observe challenges in the church are surprised to see Cardinal Pell back in the news talking about the death of Pope Benedict. The Cardinal had not been seen in public since his successful appeal in Australia.

I began making a fresh review of the evidence about Debian, FSFE, Google, Mozilla, Ubuntu and the abuse reports from young women, volunteers and the suicide cluster.

At the same time, I was looking at the belatedly unredacted report about the Archdiocese of Melbourne. Despite the fact that the church is a very old institution and the tech industry is very new in comparison, I was surprised by the similarity in the tactics used to keep victims from speaking up about their experiences.

On 10 January, I traveled to Italy and made the police report about the similarities in the shame felt by victims of Catholic abuse and the shame felt by unpaid volunteers subject to secret punishments in the tech industry. The afternoon I was meeting with the Carabinieri was the same afternoon that Cardinal Pell was having his surgery. Sadly, the Cardinal did not survive the surgery.

Cardinal George Pell, Enrico Zini

2024: in March, the Debianists begin harassing me with another UDRP demand. They accuse me of publishing "critical commentary". They don't dispute the authenticity of the critical commentary.

The WIPO legal panel, W. Scott Blackmer, writes a condescending response where he demonstrates extraordinary bias. Specifically, he fails to give any acknowledgement to the co-existence of copyright with trademark rights. He mocks me for not being a lawyer and then mocks my family and me with another reference to abuse.

My feeling is that W. Scott Blackmer has not acted independently. There is a cabal of intellectual property lawyers who appear to be colluding to work around the interests of individual, personal copyright holders in open source software projects. The cabal has created the image that these legal panels operate independently and ethically.

In reality, the lawyers who submit these censorship demands to WIPO and the lawyers who act as "legal panels" to adjudicate upon them are networking with each other behind the scenes. They are using forums such as the FSFE Legal & Licensing Workshop (LLW) and the FSFE Legal Network. I am reminded of those priests who heard reports about abuse and communicated with their colleagues behind the scenes to frustrate justice for the victims who trusted them. Nobody should trust the logic being used to seize domain names that haven't even been used yet.

The irony is that both Crimen Sollicitationis (the Catholic Church) and the WIPO UDRP are administrative procedures and both of these procedures, despite operating in very different contexts, serve to frustrate, denounce and discredit those who are seeking to be believed.

Hence my reservations about the quote selected in the RTE headline, "you will be believed".

In the vast majority of cases, the victims and witnesses to any type of abuse and exploitation have an extremely difficult road ahead of them.

Please see full details of my candidacy for the European Parliament 2024.

09 May, 2024 12:00PM

May 08, 2024

ESB warns Irish election candidates about risky behavior

ESB has circulated the email below to participants in the Irish elections. About 15 minutes after receiving the email, I came across a crew from Fine Gael with their ladder propped up against one of the poles.

Further down the road, I came across another one. Photos below.

Rural Ireland has various types of pole, including the Eircom (PSTN) poles, ESB (electric) poles, street lighting poles and road signs.

I completed my undergraduate engineering studies while working for one of Australia's leading workplace safety and rehabilitation experts. My own opinion on this subject is that it isn't reasonable to expect political party volunteers to know which pole is which. The use of ESB poles clearly presents a risk of electrical shock. The modification of road signs risks distraction for drivers. The easiest thing to do would be to prohibit the installation of signs on any pole whatsoever.

Looking at the ESB pole in the photo below, there are two supplies that are running down the side of the pole and taking an underground route into the adjacent premises. This is increasingly common in Ireland and it increases the risk for those who make contact with the pole.

While most people enjoyed a long weekend from 4 to 6 May, the Garda spent the weekend running a road safety campaign. Their work is undermined by those who compete for the attention of drivers passing the road signs.

Next weekend, 11 May, the national suicide prevention charity Pieta has their annual fundraising Darkness into Light walk at sunrise. Ironically, the main sponsor is Electric Ireland.

Jesus Saves

Jesus Saves

Don't do this

Sinn Fein, ESB pole
Subject: 	ESB Networks warn of risks associated with erecting posters on electricity poles ahead of Local and European Elections
Date: 	Wed, 8 May 2024 18:32:38 +0000
From: 	Murphy. Sean J (Strategy Innovation and Transformation) 

Dear Candidate for the European Parliament elections 2024,

For your information, please find below a Press Release issued by ESB Networks earlier this month in advance of the local and European elections in June 2024.

We share it with you to inform you and your party activists and poster hangers of the very real hazards and risks associated with hanging posters on electrical infrastructure. ESB Networks electricity poles can be recognised by the ‘Lightning Strike’ logo that appears on all ESB Networks Assets.

Please also be advised that posters that are hung inappropriately will have to be removed by local ESB Networks colleagues. This removal may result in the interruption of electricity supply to households and businesses in order to safely remove posters.

ESB Networks warn of risks associated with erecting posters on electricity poles ahead of Local and European Elections

  * The erection of posters on electricity infrastructure is strictly
    prohibited for safety reasons
  * ESB Networks has previously been required to interrupt the
    electricity supply to households and businesses in order to safely
    remove posters

ESB Networks wish to remind all Groups and Parties involved in the upcoming Local and European elections that the erection of posters on electricity poles is strictly prohibited and poses a serious safety risk to members of the public as well as ESB Networks staff and contractors.

Hazardous situations have been created in the past by people erecting posters on live electricity poles. ESB Networks’ wires and equipment are always live. Attaching anything to electricity poles exposes you to the risk of electric shock, burns and falling from a height. Posters attached to poles have caused poles to catch fire and fall. It is never safe to interfere with electricity equipment.

Posters that are erected on electricity poles will be removed by ESB Networks and the costs incurred may be recovered from the respective Parties and Groups involved.

Speaking ahead of the upcoming elections, Michael Murray, ESB Networks Public Safety Manager, said: “ESB Networks regularly advise members of the public to always stay clear of electricity poles and wires through our various campaigns. It is important that these messages are taken on board in the interest of safety. ESB Networks has previously been required to interrupt the electricity supply to households and businesses in order to safely remove posters.”

You should always stay safe and stay clear of electricity wires and cables as these are always live and potentially dangerous. If you see a potentially dangerous situation or in the event of an emergency involving the electricity network, please contact ESB Networks on our 24/7 emergency phone number: 1800 372 999.

ENDS

Sincerely,

Seán Murphy

Manager, Public Affairs

Stay away from the poles

ESB pole, sheep

Campaigns in rural France also focus on road signage

France, village signs

Please see full details of my candidacy for the European Parliament 2024.

08 May, 2024 10:30PM

May 06, 2024

World Press Freedom Day: WIPO censors Debian suicide cluster

On Friday, 3 May 2024, World Press Freedom Day, the WIPO-affiliated lawyer W. Scott Blackmer signed a UDRP domain seizure decision (case D2024-0770) that principally aims to censor the web sites Debian.News (new site at uncensored.deb.ian.community) and Debian.Day (backup copy; Techrights backup copy), which hosted the key email leaks of the Debian Day volunteer suicide.

W. Scott Blackmer, WIPO, UDRP

There are many more backup copies of these sites now. The web sites and their content appear to be completely legitimate. This is all about spending $120,000 on lawyers to write insults about a volunteer on the WIPO web site. In fact, there are another 2,500 web sites with the trademark Debian in their domain name and there has been no effort to censor any of them. This is obviously a very targeted personal attack on my family and me.

Here are all those other 2,500 domains that were not censored / seized:

Debian domains

A previous UDRP verdict has confirmed that my family and I are the real victims of harassment. That was the verdict for WeMakeFedora.org in 2022. Ever since then, these evil people have spent these enormous sums of money seeking revenge through the UDRP.

The protagonist in the WeMakeFedora.org harassment was IBM Red Hat. We note that the lawyer in this case, W. Scott Blackmer, recently made another domain seizure favourable to IBM in case D2022-1717. In that case, the domain names had not been used to publish any web sites at all. They were seized, and W. Scott Blackmer mischievously made the accusation of "bad faith" because of his suspicions about the content on holding pages. It is dubious whether anybody actually looked at the holding pages before IBM opened the dispute. Therefore, I feel that W. Scott Blackmer has a bias towards these companies and his opinions can't be trusted.

We need to bypass a lot of the wordy insults from the lawyer and focus on one line about the web sites:

used for critical commentary by initially impersonating the Complainant

So there is nothing illegal on these websites. These web sites only contained critical commentary.

Using a trademark in a domain name is not impersonation. This is an extremely zealous definition of impersonation. All the web sites contained a reference to the owner of the trademark and that is also mentioned in the censorship decision:

the Respondent’s websites and links to the Complainant’s Debian trademark policy

Legitimate co-authors can't be accused of impersonation anyway.

Moreover, the section on legitimate interests makes no reference to our rights as joint authors. This is a regression from previous WIPO UDRP cases where the copyright interests of authors were the basis for legitimate interests in using a trademark in a domain name. One of the most prominent examples of that precedent was the case of scientologie.org, where a WIPO panel ruled that a person's copyright interest in the Scientologie book could not be extinguished by the Church of Scientology trademark. (The Scientologie.org precedent is now being ignored by WIPO. Coincidentally, Scientology also has a suicide cluster, and a lot of these legal disputes are about hiding the articles about the Debian suicide cluster.)

The University of Western Australia Law Journal recently published an article Copyright Nazi Plunder: How the Nazis Aryanized Jewish Works. The WIPO procedure to censor web sites for "criticism" is not a criminal procedure and it is not even a civil law procedure. It is an administrative procedure. The Copyright Nazi Plunder report tells us:

Despite the fact that written IP legislation in Nazi Germany did not include specific exclusions for Jewish applicants and authors, in practice, they were excluded by administrative measures alone rather than legal ordinances.

W. Scott Blackmer and WIPO's allegation that Debian Developers, as co-authors, have no legitimate interests inevitably leaves me feeling a similarity to the Nazis bypassing Jewish copyright.

The censorship / domain name seizure decision was transmitted by WIPO to the various parties on 6 May 2024, which happens to be Holocaust Martyrs' and Heroes' Remembrance Day.

Subject: (ILS) D2024-0770 <debian.chat> et al. Notification of Decision
Date: Mon, 6 May 2024 [Holocaust Martyrs' and Heroes' Remembrance Day]
From: Domain.Disputes@wipo.int
To: udrp@support.gandi.net, icann@gandi.net, alessio.canova@adv-ip.it, ...
CC: udrp@icann.org

ARBITRATION AND MEDIATION CENTER

      WIPO Arbitration and Mediation Center

	WIPO Logo
------------------------------------------------------------------------
May 6, 2024 [Holocaust Martyrs' and Heroes' Remembrance Day]

Re: Case No. D2024-0770

Ultimately, with over 2,500 domains using permutations of the trademark Debian, why did debian.day (backup) make people so angry that they spent $120,000 to censor it while leaving all the other 2,500 web sites intact? Am I really so special to them in some way? Probably not. It is all about hiding the Debian suicide cluster.

We can't forget the deeply personal impact of these Debian suicides and other deaths on my family. One of the volunteers died on our wedding day and the coroner's report is not being made public. Abraham Raji died in the middle of these legal vendettas. Despite paying over $120,000 to lawyers, they had asked volunteers like Abraham Raji to pay extra money to participate in the kayak trip at DebConf. Raji decided not to pay the extra money; he was left alone, he was left without supervision, he was left without a lifejacket and he died. I found the following text on Abraham Raji's web site "Why Debian":

There is no room for underhand corporate deals, no unfair treatment behind private mails and everything can be reviewed by the public.

It looks like Abraham Raji was well and truly fooled by this gulag and it may have been a factor in his death too.

Please see my chronological history of how the Debian harassment and abuse culture evolved.

06 May, 2024 09:30PM

May 05, 2024

Reference letter: UBS Investment Bank, Zurich

I've worked on a number of foreign exchange trading and treasury projects for different banks.

At the time of the credit crisis of 2007-2008, financial regulators were keen for banks to increase scrutiny of the trades executed by their traders.

Prior to this, traders could make several trades throughout the course of the day and the bank would only become aware of the trades some hours later or in the evening. New regulations required banks to signal the trades to their counterparties within minutes rather than hours.

UBS contracted me to implement these messaging systems for a range of FX and FX derivative trades. The letter below explains more about how I assisted them to improve their systems at this important time.

UBS, Daniel Pocock

05 May, 2024 11:30PM

April 02, 2015

hackergotchi for Rudy Godoy

Rudy Godoy

No SSH for you – how to fix OpenRG routers

I find it interesting to be faced with a taken-for-granted situation, in particular in the tech world. This time’s story is about an Internet ADSL connection that allows all traffic but SSH. Yes, you read that correctly. It was very bizarre to confirm that such a thing was a real-life situation.

I don’t claim to be a networking expert, but at least I like to think I’m well educated. After a few minutes I focused my efforts on the ADSL router/modem’s networking configuration. The device is provided by Movistar (formerly Telefonica) and it runs OpenRG. I’ve discovered that others have the same issue, and what Movistar did was basically replace the device. Of course the problem is gone after that.

So, this post is dedicated to those who don’t give up. Following the steps below will allow SSH outbound traffic on an OpenRG-based device.

OpenRG device specs

Software Version: 6.0.18.1.110.1.52 Upgrade
Release Date: Oct 7 2014

Diagnostic

When you run the command below, it shows nothing but a timeout. Even when you SSH to the router itself, it doesn’t establish a connection.

ssh -vv host.somewhere.com

Solution

Change the router’s SSH service port.

This step will allow you to access the console-based configuration for the router (since I haven’t found any way to do the steps described below from the web management interface).

To do so, go to System > Management > SSH. Update the service port to something other than 22, for instance 2222.

OpenRG SSH service configuration

Connect to the SSH interface

Once you have changed the SSH service port, you can access it from an SSH client.

ssh -p 2222 [email protected]
[email protected]'s password: 
OpenRG>

Once you have the console prompt, issue the following commands to allow SSH outbound traffic coming from the LAN and Wifi networks. After the last command, which saves and updates the device’s configuration, you should be able to do SSH from any computer in your network to the Internet (thanks to ).

OpenRG> conf set fw/policy/0/chain/fw_br0_in/rule/0/enabled 0

Returned 0
OpenRG> conf set fw/policy/0/chain/fw_br1_in/rule/0/enabled 0

Returned 0
OpenRG> conf reconf 1

Returned 0
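To double-check, rerun the earlier diagnostic from a machine on the LAN (the host name here is the same placeholder as above); instead of hanging until a timeout, the verbose output should now report the connection being established:

ssh -vv host.somewhere.com
debug1: Connection established.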

02 April, 2015 10:46PM

December 20, 2014

Apache Phoenix for Cloudera CDH

Apache Phoenix is a relational database layer over HBase, delivered as a client-embedded JDBC driver targeting low latency queries over HBase data. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets.

What the above statement means for developers or data scientists is that you can “talk” SQL to your HBase cluster. Sounds good, right? Setting up Phoenix on Cloudera CDH can be really frustrating and time-consuming. I wrapped up references from across the web with my own findings to have both play nice.

Building Apache Phoenix

Because of dependency mismatches in the pre-built binaries, supporting Cloudera’s CDH requires building Phoenix against the versions that match the CDH deployment. The CDH version I used is CDH4.7.0. This guide applies to any version of CDH4+.

Note: You can find CDH components version in the “CDH Packaging and Tarball Information” section for the “Cloudera Release Guide”. Current release information (CDH5.2.1) is available in this .

Preparing Phoenix build environment

Phoenix can be built using maven or gradle. General instructions can be found in the “” webpage.

Before building Phoenix you need to have:

  • JDK v6 (or v7 depending on which CDH version you are willing to support)
  • Maven 3
  • git

Checkout correct Phoenix branch

Phoenix has two major release versions:

  • 3.x – supports HBase 0.94.x   (Available on CDH4 and previous versions)
  • 4.x – supports HBase 0.98.1+ (Available since CDH5)

Clone the Phoenix git repository

git clone https://github.com/apache/phoenix.git

Work with the correct branch

git fetch origin
git checkout 3.2

Modify dependencies to match CDH

Before building Phoenix, you will need to modify the dependencies to match the version of CDH you are trying to support. Edit phoenix/pom.xml and do the following changes:

Add Cloudera’s Maven repository

+    <repository>
+        <id>cloudera</id>
+        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
+    </repository>

Change component versions to match CDH’s.

     
-    <hadoop-one.version>1.0.4</hadoop-one.version>
-    <hadoop-two.version>2.0.4-alpha</hadoop-two.version>
+    <hadoop-one.version>2.0.0-mr1-cdh4.7.0</hadoop-one.version>
+    <hadoop-two.version>2.0.0-cdh4.7.0</hadoop-two.version>
     <!-- Dependency versions -->
-    <hbase.version>0.94.19</hbase.version>
+    <hbase.version>0.94.15-cdh4.7.0</hbase.version>
     <commons-cli.version>1.2</commons-cli.version>
-    <hadoop.version>1.0.4</hadoop.version>
+    <hadoop.version>2.0.0-cdh4.7.0</hadoop.version>
     <pig.version>0.12.0</pig.version>
     <jackson.version>1.8.8</jackson.version>
     <antlr.version>3.5</antlr.version>
     <log4j.version>1.2.16</log4j.version>
     <slf4j-api.version>1.4.3.jar</slf4j-api.version>
     <slf4j-log4j.version>1.4.3</slf4j-log4j.version>
-    <protobuf-java.version>2.4.0</protobuf-java.version>
+    <protobuf-java.version>2.4.0a</protobuf-java.version>
     <commons-configuration.version>1.6</commons-configuration.version>
     <commons-io.version>2.1</commons-io.version>
     <commons-lang.version>2.5</commons-lang.version>

Change the target version only if you are building for Java 6 (CDH4 is built for JRE 6).

           <artifactId>maven-compiler-plugin</artifactId>
           <version>3.0</version>
           <configuration>
-            <source>1.7</source>
-            <target>1.7</target>
+            <source>1.6</source>
+            <target>1.6</target>
           </configuration>

Phoenix building

Once you have made the changes, you are set to build Phoenix. Our CDH4.7.0 cluster uses Hadoop 2, so make sure to activate the hadoop2 profile.

mvn package -DskipTests -Dhadoop.profile=2

If everything goes well, you should see the following result:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Phoenix .................................... SUCCESS [2.729s]
[INFO] Phoenix Hadoop Compatibility ...................... SUCCESS [0.882s]
[INFO] Phoenix Core ...................................... SUCCESS [24.040s]
[INFO] Phoenix - Flume ................................... SUCCESS [1.679s]
[INFO] Phoenix - Pig ..................................... SUCCESS [1.741s]
[INFO] Phoenix Hadoop2 Compatibility ..................... SUCCESS [0.200s]
[INFO] Phoenix Assembly .................................. SUCCESS [30.176s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:02.186s
[INFO] Finished at: Mon Dec 15 13:18:48 PET 2014
[INFO] Final Memory: 45M/1330M
[INFO] ------------------------------------------------------------------------

Phoenix Server component deployment

Since Phoenix is a JDBC layer on top of HBase, a server component has to be deployed on every HBase node. The goal is to have the Phoenix server component added to the HBase classpath.

You can achieve this goal either by copying the server component directly to HBase’s lib directory, or by copying the component to an alternative path and then modifying the HBase classpath definition.

For the first approach, do:

cp phoenix-assembly/target/phoenix-3.2.3-SNAPSHOT-server.jar /opt/cloudera/parcels/CDH/lib/hbase/lib/

Note: In this case CDH is a symlink to the current active CDH version.

For the second approach, do:

cp phoenix-assembly/target/phoenix-3.2.3-SNAPSHOT-server.jar /opt/phoenix/

Then add the following line to /etc/hbase/conf/hbase-env.sh

/etc/hbase/conf/hbase-env.sh
export HBASE_CLASSPATH_PREFIX=/opt/phoenix/phoenix-3.2.3-SNAPSHOT-server.jar

Whichever method you’ve used, you have to restart HBase. If you are using Cloudera Manager, restart the HBase service.

To validate that Phoenix is on HBase class path, do:

sudo -u hbase hbase classpath | tr ':' '\n' | grep phoenix
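With the second approach, for example, the output should include the path you copied the jar to:

/opt/phoenix/phoenix-3.2.3-SNAPSHOT-server.jar

If grep prints nothing, HBase is not picking up the jar and the client tools below will fail to connect.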

Phoenix server validation

Phoenix provides a set of client tools that you can use to validate that the server component is functioning. However, since we are supporting CDH4.7.0, we’ll need to make a few changes to such utilities so they use the correct dependencies.

phoenix/bin/sqlline.py:

sqlline.py is a wrapper for the JDBC client; it provides a SQL console interface to HBase through Phoenix.

index f48e527..bf06148 100755
--- a/bin/sqlline.py
+++ b/bin/sqlline.py
@@ -53,7 +53,8 @@ colorSetting = "true"
 if os.name == 'nt':
     colorSetting = "false"
-java_cmd = 'java -cp "' + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
+extrajars="/opt/cloudera/parcels/CDH/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-auth-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-common-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/oozie/libserver/hbase-0.94.15-cdh4.7.0.jar"
+java_cmd = 'java -cp ".' + os.pathsep + extrajars + os.pathsep + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
     '" -Dlog4j.configuration=file:' + \
     os.path.join(phoenix_utils.current_dir, "log4j.properties") + \
     " sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver \

phoenix/bin/psql.py:

psql.py is a wrapper tool that can be used to create and populate HBase tables.

index 34a95df..b61fde4 100755
--- a/bin/psql.py
+++ b/bin/psql.py
@@ -34,7 +34,8 @@ else:
 # HBase configuration folder path (where hbase-site.xml reside) for
 # HBase/Phoenix client side property override
-java_cmd = 'java -cp "' + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
+extrajars="/opt/cloudera/parcels/CDH/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-auth-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-common-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/oozie/libserver/hbase-0.94.15-cdh4.7.0.jar"
+java_cmd = 'java -cp ".' + os.pathsep + extrajars + os.pathsep + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
     '" -Dlog4j.configuration=file:' + \
     os.path.join(phoenix_utils.current_dir, "log4j.properties") + \
     " org.apache.phoenix.util.PhoenixRuntime " + args

After you have made these changes, you can test connectivity by issuing the following commands:

./bin/sqlline.py zookeeper.local
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:zookeeper.local none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:zookeeper.local
14/12/16 19:26:10 WARN conf.Configuration: dfs.df.interval is deprecated. Instead, use fs.df.interval
14/12/16 19:26:10 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
14/12/16 19:26:10 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:10 WARN conf.Configuration: topology.script.number.args is deprecated. Instead, use net.topology.script.number.args
14/12/16 19:26:10 WARN conf.Configuration: dfs.umaskmode is deprecated. Instead, use fs.permissions.umask-mode
14/12/16 19:26:10 WARN conf.Configuration: topology.node.switch.mapping.impl is deprecated. Instead, use net.topology.node.switch.mapping.impl
14/12/16 19:26:11 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/12/16 19:26:12 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:12 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
Connected to: Phoenix (version 3.2)
Driver: PhoenixEmbeddedDriver (version 3.2)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
77/77 (100%) Done
Done
sqlline version 1.1.2
0: jdbc:phoenix:zookeeper.local>

Then, you can either issue SQL-commands or Phoenix-commands.

0: jdbc:phoenix:zookeeper.local> !tables
+------------------------------------------+------------------------------------------+------------------------------------------+---------------------------+
|                TABLE_CAT                 |               TABLE_SCHEM                |                TABLE_NAME                |                TABLE_TYPE |
+------------------------------------------+------------------------------------------+------------------------------------------+---------------------------+
| null                                     | SYSTEM                                   | CATALOG                                  | SYSTEM TABLE              |
| null                                     | SYSTEM                                   | SEQUENCE                                 | SYSTEM TABLE              |
| null                                     | SYSTEM                                   | STATS                                    | SYSTEM TABLE              |
| null                                     | null                                     | STOCK_SYMBOL                             | TABLE                     |
| null                                     | null                                     | WEB_STAT                                 | TABLE                     |
+------------------------------------------+------------------------------------------+------------------------------------------+---------------------------+
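As a quick end-to-end check, a short SQL session along these lines should round-trip data through Phoenix (the table name here is made up for the example):

0: jdbc:phoenix:zookeeper.local> CREATE TABLE IF NOT EXISTS smoke_test (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR);
0: jdbc:phoenix:zookeeper.local> UPSERT INTO smoke_test VALUES (1, 'phoenix');
0: jdbc:phoenix:zookeeper.local> SELECT * FROM smoke_test;

Note that Phoenix uses UPSERT rather than the standard INSERT.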

20 December, 2014 03:40PM

February 14, 2014

Getting Movistar Peru ZTE MF193 work in Debian GNU/Linux

After so many attempts to get my shiny Movistar Peru (Internet Móvil) 3G ZTE MF193 modem to work out-of-the-box in Debian jessie (unstable) with NetworkManager, the word frustration was hammering in my head. Even trying to do , led me to craziness. I gave up on fancy tools and decided to take the old-school route. Release wvdial and friends!

Trying different combinations for wvdial.conf was no heaven for sure, but I’ve found this wonderful guide from Vienna, Austria, that really made a difference. Of course he’s talking about the model MF180, but you get the idea. So I’m sharing what was different for the MF193.

Basically, I had already done the eject and disable-CD-ROM thing, but still no progress. I had also tried using wvdial to send AT commands to the evasive /dev/ttyUSBX device. Starting from scratch confirmed that I had indeed done such things properly. I was amused by the fact that I could use screen to talk to the modem! (yo, all the time wasted trying to have minicom and friends play nice)
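For the record, such a session looks roughly like this. The device node is the one that worked for me, and AT+ZCDRUN=8 is the ZTE-specific command commonly reported for permanently disabling the virtual CD-ROM; treat both as assumptions to adapt to your setup:

screen /dev/ttyUSB2 9600
AT
OK
AT+ZCDRUN=8
OK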

So, let’s get to the point. After following this procedure, you should be able to use NetworkManager to connect to the Interwebs using the 3G data service from Movistar Peru.

  1. Step 1 – follow the guide
  2. Step 2 – Here I had to use /dev/ttyUSB4
  3. Step 3 – follow the guide
  4. Unplug your USB modem
  5. Plug your USB modem. This time you should see only /dev/ttyUSB{0,1,2} and /dev/gsmmodem should be missing (not sure if this is a bug). Now /dev/ttyUSB2 is your guy.
  6. Step 4 – use /dev/ttyUSB2
  7. Run wvdial from CLI – it should connect successfully.
  8. Stop wvdial
  9. Select the Network icon on GNOME3, click on the Mobile Broadband configuration you have, if not create one.
  10. Voilà. Happy surfing!

I’m pasting my wvdial.conf, just in case.

[Dialer Defaults]
Modem = /dev/ttyUSB2
Username = movistar@datos
Password = movistar
APN = movistar.pe
Phone = *99#
Stupid Mode = 1
Init2 = AT+CGDCONT=4,"IP","movistar.pe"

14 February, 2014 02:16AM

January 30, 2014

What have we done?

A couple of weeks ago I was in the situation of having to set up a new laptop. I decided to go with wheezy’s DVD installer. So far so good. I didn’t expect (somewhere in the lies-I-tell-myself dept.) to have GNOME as the default Debian desktop. However, after the install I figured out it was the new GNOME3 everybody was talking about. Before that I’d seen it on Ubuntu systems used by my classmates. I thought: yeah, OK, GNOME3, that’s fine for them as a Linux desktop. Turns out that I started using the Debian desktop aka GNOME3 and noticed that it was not as bloated as the Ubuntu desktop I’d seen before, so I stuck with it (for a while, I thought).

Turns out that I did like this new so-called GNOME3, a system that is not window-based but application-based (that is something that sticks in my head). I liked the way it makes sense as a desktop system, like when you look for applications or documents, connect to networks, use pluggable devices or just configure stuff, every time with less and less effort. Good practices and concepts learned from Mac OS X-like environments, and for sure taking advantage of the new features the Linux kernel and user-space environment have gained over the years. So, one month later I’m sticking with it and it makes sense for me to keep it. I had no chance to try the latest XFCE or KDE, my default choices before this experience. Kudos, GNOME team, even after the criticism GNOME Shell received, as I learned.

This whole situation got me pondering about the past of the Linux user experience and how we in the community led people into joining. I remember that when a guy asked: how do I configure this new monitor/VGA card/network card/etc.?, the answer was along the lines of: what is the exact chipset model and specific product code number of your device? Putting myself in the shoes of such people, or today’s people, I’d say: what are you talking about? What is a chipset? It was so technical that only a guy with more than average knowledge could grasp it. From a product perspective this is similar to a car manufacturer telling a customer to look up the exact layout or design of their car’s engine, so that they can tell whether it is the ’82 model A or the ’83 model C. Simplicity in naming and identification was not in the mindset of most of us.

This is funny because as technology advances it also becomes more transparent to the customer. So, for instance, today’s kids can become real power users of any new technology, as if they had, indeed, many pre-set chipsets in their brain. But when going into the details we had to grasp a few years ago, they have a hard time figuring out the complexity of the product behind its clean and simple interface. New times, interesting times. Always good to look back. No, I’m not getting old.


30 January, 2014 05:52PM

November 15, 2013

New GPG Key 65F80A4D

Update: I no longer use this key. Instead I’ve moved to a new key: D562EBBE.

A bit late, but I’ve created a new GPG key. I’ve published it as well. I hope I can meet fellow Debian developers soon to get my new key signed. So, if you are in town (Arequipa, Peru) drop me an email!
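If you want to fetch and verify the key before signing it, the usual commands are enough (pick whatever keyserver you prefer):

gpg --recv-keys 65F80A4D
gpg --fingerprint 65F80A4D

Compare the fingerprint with me in person before signing.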

 


15 November, 2013 03:06AM

August 10, 2013

Building non-service cartridges for RedHat Openshift

Cloud is passing the hype curve and we are seeing more stable developments and offerings on the market. Lately I’ve been playing with RedHat’s Openshift. Openshift is a PaaS (Platform as a Service) offering that intends to be an alternative for vendors such as Heroku. The focus for such offerings is to give developers enough flexibility and tools that handle the application deployment and maintenance process in a way that is integrated with their existing development workflow and tools.

I’ve been using Heroku for a while to deploy small to medium size projects. I liked the tools and developer-centered experience they offer. Openshift is quite new on the market and it comes in two flavors: Openshift Online, which is a hosted PaaS service, and Openshift Enterprise, which allows organizations to set up a PaaS within their own infrastructure. Both of them are powered by the Openshift Origin software. I’ll not compare Heroku vs. Openshift feature by feature, but from my experience I can tell that Openshift is far from mature and will need to give developers more features to increase adoption.

When developing applications for Openshift, developers are given a set of application stack components, similar to Heroku’s buildpacks. They call them cartridges. You can think of them as operating system packages, since the idea is the same: have a particular application stack component ready to be used by application developers. Most of the cartridges offered by the service are base components such as application servers, programming language interpreters (Ruby, Python, etc.), web development frameworks and data storage, such as relational and non-relational databases. Cartridges are installed inside a gear, which is a sort of application container (I believe it uses LXC internally). Unsurprisingly, this Openshift component doesn’t leverage existing packaging for RHEL 64bit, the OS that powers the service. I’d expect such things from the RedHat ecosystem.

I had to develop a cartridge to have a particular BI engine available as an embedded component for application developers. After reading the documentation I realized this could be a piece of cake, since I have packaging experience. Wrong. Well, almost. The tricky part with Openshift Online is that it does not offer enough information on the cartridge install process, so you can’t see what’s going wrong. To be able to see more details on the process you’ll need to set up an Openshift Origin server and use it as a testing facility. Turns out that getting an Origin server to operate correctly is also a challenge and it consumed a lot of my time. Over the recent weeks I’ve learned from Origin developers that such features are on the road map for the upcoming releases. That’s good news.

One of the challenges I had, and still have to figure out, is that unlike the normal cartridges mine didn’t require launching any service. Since it is a BI engine, I just needed to configure it and deploy it to an application server such as JBoss. The cartridge format requires a sort of init.d-style service script under bin, along with setup, install and configuration scripts that are run on install. Although every day I become more familiar with Origin and Openshift internals, I still have work to do. The nice thing is that I was already familiar with LXC and Ruby-based applications, so I could figure out where things are placed and where to look for errors on Origin quite easily. The cartridges are on my repository if you care to take a look and offer advice.
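For illustration, here is a minimal sketch of the kind of do-nothing service script the format seems to want under bin. The file name and verbs follow the cartridge convention as I currently understand it, so treat the details as assumptions rather than a definitive implementation:

#!/bin/bash
# bin/control -- control script for a cartridge that launches no daemon.
# The platform calls this with an action verb; since the BI engine is
# deployed into the application server by the setup/install scripts,
# the service actions can simply succeed without doing anything.
case "$1" in
  start|stop|restart|status)
    exit 0
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}" >&2
    exit 1
    ;;
esac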


10 August, 2013 03:15PM

March 14, 2013

Subversion auth using SSH external key-pair

Usually, when using Subversion’s SSH authentication facility, Subversion’s client will make use of your own SSH-generated key-pair and read it from the proper location, usually $HOME/.ssh. However, there could be situations when you’ll need to use a different key-pair. In such situations you can use a nice trick to have svn+ssh authentication work smoothly.

Let’s say you have an external key-pair; the public key is already configured on the Subversion server, and you have the private key stored somewhere in your home directory. Now, when issuing a svn checkout, you’ll find that you need some sort of SSH -i parameter to tell svn to use your external key-pair for authentication. Since there is no way to instruct Subversion’s client to do so directly, you’ll need to use a system environment variable.

Subversion makes your life easier by providing the $SVN_SSH environment variable. This variable allows you to put the ssh command and modifiers that fit your authentication needs. For our external key-pair use case, you can do something like:

export SVN_SSH="ssh -i </path/to/external-key>"

Now, the next time you use Subversion’s svn+ssh authentication facility, the client will read $SVN_SSH and spawn an ssh tunnel using the parameters you have defined. Once it has successfully authenticated, you can use Subversion commands such as checkout, commit, etc. in the same fashion you normally would.

svn co svn+ssh://[email protected]/repo/for/software

Alternatives

Jeff Epler offered great advice with a more flexible approach using .ssh/config and key-pairs based on hostname.

Host svn.example.com
IdentityFile %d/.ssh/id_rsa-svn.example.com
Host svn2.coder.com
IdentityFile %d/.ssh/id_rsa-svn2.coder.com


14 March, 2013 01:12PM

December 26, 2012

s3tools – Simple Storage Service for non-Amazon providers

One of the nicest developments in the cloud arena is the increasing adoption of standards. This, of course, improves maturity and market confidence in such technologies.

Amazon, as one of the pioneers, made a good choice in their offering design by making their API implementation public. Now, vendors such as Eucalyptus, with its private/hybrid cloud offering, and many other providers can leverage and build upon the specs to offer compatible services, removing the hassle for their customers of learning a new technology/tool.

I have bare-metal servers sitting in a data-center. A couple of months ago I learned about their new cloud storage offering. Since I’m working a lot on cloud lately, I checked the service. It was nice to learn they are not re-inventing the wheel but instead implementing Amazon’s Simple Storage Service (S3), the de facto standard for cloud storage.

Currently there are many S3-compatible tools available, both FLOSS and freeware/closed source. I’ve been using s3cmd, which is already available in the Debian archive, to interact with S3-compatible services. Usage is pretty straightforward.

For my use case I intend to store copies of certain files on my provider’s S3-compatible service. Before being able to store files I’ll need to create buckets. If you are not very familiar with S3 terminology, buckets can be seen as containers or folders (in the desktop paradigm).

First thing to do is configure your keys and credentials for accessing S3 from your provider. I do recommend using the --configure option to create the $HOME/.s3cfg file, because it will fill in all the available options for a standard S3 service, leaving you just the work of tweaking them based on your needs. You can create the file all by yourself if you prefer, of course.

$ sudo aptitude install s3cmd

$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key:
...

You’ll be required to enter the access key and the secret key. You’ll be asked for an encryption password (use it only if you plan to use this feature). Finally, the software will test the configuration against Amazon’s service. Since this is not our case, it will fail. Tell it not to retry the configuration and say Y to save the configuration.

Now, edit the $HOME/.s3cfg file and set the address for your private/third-party S3 provider. This is done here:

host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com

Change s3.amazonaws.com to your provider’s address, and update the host_bucket configuration as well. In my case I had to use:

host_base = rs1.connectria.com
host_bucket = %(bucket)s.rs1.connectria.com

Now, save the file and test the service by listing the available buckets (of course there are none yet).

$ s3cmd ls

If you don’t get an error then the tool is properly configured. Now you can create buckets, put files, list, etc.

$ s3cmd mb s3://testbucket
Bucket 's3://testbucket/' created


$ s3cmd put testfile.txt s3://testbucket
testfile.txt -> s3://testbucket/testfile.txt [1 of 1]
8331 of 8331 100% in 1s 7.48 kB/s done

$ s3cmd ls s3://testbucket
2012-12-26 22:09 8331 s3://testbucket/testfile.txt
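Retrieving the copies later works the same way; for instance (the local paths here are made up):

$ s3cmd get s3://testbucket/testfile.txt restored-testfile.txt
$ s3cmd sync ~/backups/ s3://testbucket/backups/

sync only transfers files that have changed, which suits the periodic-copies use case I described above.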

26 December, 2012 10:26PM

December 11, 2012

Puppet weird SSL error: SSL_read:: pkcs1 padding too short

While setting up a puppet agent to talk to my puppetmaster, I got this weird SSL error:

Error: SSL_read:: pkcs1 padding too short

Debugging on both the agent and master sides didn’t offer much information.

On the master:

puppet master --no-daemonize --debug

On the agent:

puppet agent --test --debug

Although I had my master running 3.0.1 and the tested agent running 2.7, the problem didn’t look related to that. People at #puppet also hadn’t seen this error before.

I figured the problem reduced to an issue with openssl. So I checked versions, and there I had it! The agent’s openssl was version 1.0.0j-1.43.amzn1 while the master was using openssl-0.9.8b-10.el5_2.1. So I upgraded the master’s openssl to openssl.i686 0:0.9.8e-22.el5_8.4 and voilà, the problem was gone.
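For reference, spotting the mismatch was as simple as checking the installed package on each box (these are RPM-based systems):

rpm -q openssl
openssl version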

I learned that there has been a fix in OpenSSL’s VCS that is apparently related to the issue. Hope this helps if you get into the described situation.


11 December, 2012 11:30AM

November 14, 2012

Function testing using function pointers in C++

I do like the way programmers think in terms of DRY, for instance, among other forms of optimization. A couple of days ago I wanted to test different implementations of the same algorithm in C++.

The usual approach would be to implement those algorithms, call them in main() and see the results. A more interesting approach would be to write a validator for each function and print whether the result is correct or not.

Since both are boring and break the rule of DRY, I’ve decided to go complicated but productive, so I’ve implemented a function testing method using function pointers that will allow me to write a single function and use it to test the algorithms using different input and expected results.

Let’s say I have 3 algorithms that implement Modular Exponentiation.

uint64 modexp1new(const uint64 &a, const uint64 &_p, const uint64 &n){
 ...
}

uint64 modexp2(const uint64 &_a, const uint64 &_p, const uint64 &_n){
 ...
} 

uint64 modexp3(const uint64 &a, const uint64 &p, const uint64 &n){
 ...
}

If you are not familiar with the syntax, const means that the parameter is constant, thus it will not be modified inside the function. The & means that the parameter is passed by reference, i.e. the memory address of the calling variable, so we don’t fill the RAM with new copies of the variables.

Now, the testing part. For my very own purposes I just want to test those particular functions and know whether the result they output matches what I expect. Some people call this unit testing.
I do also want to test them in different scenarios, meaning different inputs and expected outputs.

So, I’m creating a function that will get the three parameters required to call the functions, and a fourth parameter that is the value I do expect to be the correct answer.

Now, since I don’t want to repeat myself writing a call for each case, I’m going to create an array of function pointers holding each function’s address. That way I can call them in a loop, and voilà! we are done.

Finally, after calling the function I check the result with the expected value and print an OK or FAIL.

The tricky part here could be understanding the function pointer. Two things to consider: first, the return type of the referenced functions has to be the same for all of them. Second, the parameter list of each function has to match too. This is important because a function pointer’s type encodes the full signature, and the compiler needs it to generate a correct call through the pointer.

For this sample code the uint64 is a typedef for long long, of course. Full code is below.

void testAlgos(const uint64 &a, const uint64 &e, const uint64 &n, const uint64 &r){
  uint64 (*fP[3])(const uint64&, const uint64&, const uint64&) = { &modexp1new, &modexp2, &modexp3 };
  uint64 t;

  for(int i=0; i<3; i++){
    t = fP[i](a, e, n);
    std::cout << "F(" << i << ") " << t << "\t";
    if (t == r){
      std::cout << "OK";
    } else {
      std::cout << "FAIL";
    }
    std::cout << std::endl;
  }
}

Now that I have this, I can use it in main this way.

int main(){

  uint64 a = 3740332;
  uint64 e = 44383;
  uint64 n = 3130164467;
  uint64 r = 1976425102;
  testAlgos(a, e, n, r);

  a = 404137263;
  r = 2520752541;
  testAlgos(a, e, n, r);

  a = 21;
  e = 3;
  n = 1003;
  r = 234;
  testAlgos(a, e, n, r);

  return 0;
}

Resulting in:

F(0) 657519405 FAIL
F(1) 1976425102 OK
F(2) 657519405 FAIL
F(0) -2752082808 FAIL
F(1) 2520752541 OK
F(2) -2752082808 FAIL
F(0) 234 OK
F(1) 234 OK
F(2) 234 OK 

I’m happy about not having to write tests like this for all use cases:

std::cout << "f1: " << modexp1new(a, e, n) << std::endl;
std::cout << "f2: " << modexp2(a, e, n) << std::endl;
std::cout << "f3: " << modexp3(a, e, n) << std::endl; 

Now, happiness could be even greater if I used templates, so I could test any function independently of the data types of the return and input values. You know, more abstraction. Homework if time allows! Have fun!
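As a teaser for that homework, here is a minimal sketch: the tester is templated on the return and argument types, so any group of functions sharing one signature can be validated in a loop. This is just one possible design, not a definitive implementation:

#include <cstddef>
#include <iostream>
#include <vector>

typedef long long uint64; // same typedef as above

// R is the common return type, A the common argument type.
template <typename R, typename A>
void testAlgosT(const std::vector<R (*)(const A&, const A&, const A&)> &fns,
                const A &a, const A &e, const A &n, const R &expected) {
  for (std::size_t i = 0; i < fns.size(); i++) {
    R t = fns[i](a, e, n);
    std::cout << "F(" << i << ") " << t << "\t"
              << (t == expected ? "OK" : "FAIL") << std::endl;
  }
}

// Usage, reusing the modexp functions from above:
//   std::vector<uint64 (*)(const uint64&, const uint64&, const uint64&)> fns;
//   fns.push_back(&modexp1new);
//   fns.push_back(&modexp2);
//   fns.push_back(&modexp3);
//   testAlgosT(fns, a, e, n, r);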


14 November, 2012 03:55AM