For the moment I will be using this thread.

  • Bug: The file link in the entries table is wrong (while it is right on the entry edit page)
  • Suggestion: It would be great to be able to build SSL URLs as well (S3 and Cloudfront will allow this). One option would be to have a flag "use SSL" (which is maybe too much), another option would be to allow a CNAME value starting with https:// (which is not a valid CNAME anymore, I know...)
  • Suggestion: If one is using Cloudfront, the overall cost will also depend on the Cache Control Header. This might be an "expert option" in the config file only. But this only makes sense if you have the additional option to use unique filenames, an option which should probably be implemented in the section editor (settings panel). As proposed in this discussion, I would vote for using the uniqid() function (which I will probably use in the next version of the Unique Upload Field). (The uniqueness option will be important as soon as you use Amazon's Cloudfront, because Cloudfront will cache your files.)

Bug: The file link in the entries table is wrong (while it is right on the entry edit page).

There was no prepareTableValue function, that's an easy one.

Suggestion: It would be great to be able to build SSL URLs as well (S3 and Cloudfront will allow this). One option would be to have a flag "use SSL" (which is maybe too much), another option would be to allow a CNAME value starting with https:// (which is not a valid CNAME anymore, I know...)

It would make more sense to have an SSL flag, but should it be section-specific or entry-specific? Section-specific seems like the better fit.
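
To make that concrete, here is a rough sketch of how the URL could be put together with such a flag (purely illustrative, not the extension's actual code; the function and variable names are made up, and $key is assumed to be the bare filename):

function buildFileUrl($bucket, $key, $cname = null, $useSsl = false) {
    $scheme = $useSsl ? 'https' : 'http';

    // A custom CNAME (e.g. files.example.com) will not match Amazon's
    // certificate, so fall back to the bucket hostname when SSL is requested.
    if ($cname && !$useSsl) {
        $host = $cname;
    }
    else {
        $host = $bucket . '.s3.amazonaws.com';
    }

    return $scheme . '://' . $host . '/' . rawurlencode($key);
}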

And re: the last suggestion, making the filenames unique is easy enough with that function; this could be a section-wide option as well.

Where does Amazon expect the Cache-Control header to be set? Is it on a per-file basis, so it would just be part of the array where you set Content-Type?

Cool! The section-wide setting for SSL is the better solution. In this case you might arrange those checkboxes in groups (i.e. columns).

Where does Amazon expect the Cache-Control header to be set?

Yes, it's on a per-file basis, and AFAIK it can be done in the same parameter array. I'll be happy to do some testing. (Have I mentioned that I am rather good at breaking things?)
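
If it helps, something like this is what I have in mind, assuming the extension uses the common standalone S3 PHP class (Donovan Schönknecht's S3.php), where putObject() accepts a request-headers array; the variables here are placeholders:

// Cache-Control is just another request header, sent alongside Content-Type.
$requestHeaders = array(
    'Content-Type'  => 'audio/mpeg',
    'Cache-Control' => 'max-age=864000' // 10 days, expressed in seconds
);

S3::putObject(
    S3::inputFile($localFile),  // placeholder: path to the uploaded file
    $bucket,                    // placeholder: target bucket name
    $filename,                  // placeholder: object key on S3
    S3::ACL_PUBLIC_READ,
    array(),                    // x-amz-meta-* headers (none needed here)
    $requestHeaders
);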

@scottkf I pulled the update. Thank you. I tried to upload to the S3 account with no luck. It's still doing the same thing.

Here are my settings for the S3 Upload field in my Audio section:

My settings are in the attached screenshot. When I try to upload, it clears out the title and the MP3 file I selected and gives no error information.

I'll check out the log and get back to you.

Attachments:
Screen shot 2011-05-31 at 1.30.35 PM.png

There is some information on this in the Cloudfront FAQ:

http://aws.amazon.com/en/cloudfront/faqs/

[EDIT]: Sorry, I was referring to the post before yours... You just happened to save faster.

I cleared the logs/main.txt and attempted another upload to the S3 bucket. The file did not upload and there is no entry in the log file.

There is some information on this in the Cloudfront FAQ:

@michael-e, is your comment directed to me or @scottkf?

Sorry, it was meant as a response to @scottkf.

Regarding your problem: I am not sure, but maybe the S3 class which is used requires OpenSSL in PHP. Have you checked whether this is enabled in your phpinfo()?

Checking right now. Thanks, @michael-e

@michael-e, here's our OpenSSL info, does that look right to you?

Attachments:
Screen shot 2011-05-31 at 1.50.04 PM.png

Yeah, that looks perfectly right. So I was on the wrong track.

Just to make sure that it's no server-related issue: Have you tried the field on a different server?

Alright, I added your suggestions and fixed that bug; grab it from master. The cache control setting I only put in Preferences at the moment, for testing, but eventually you could set it per file, I suppose. And it must be in seconds for now.

  • Added SSL option to the section
  • Added Unique File option to the section
  • Added Cache Control option in preferences, needs to be in seconds, defaults to 10 days
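
For anyone wondering how the seconds value from Preferences turns into the actual header, it is roughly this (an illustrative sketch; $prefValue stands in for whatever the extension reads from its preferences):

// $prefValue would come from the extension's Preferences (name assumed).
$seconds = trim((string) $prefValue) !== '' ? (int) $prefValue : 10 * 24 * 60 * 60; // default: 864000 s = 10 days
$requestHeaders['Cache-Control'] = 'max-age=' . $seconds;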

Perfect work, thank you! Just one mini-thing: Could we append the unique ID to the filename using a standard minus (-) character? (That's the way the Unique File Upload extension does it.)

I will test how CloudFront reacts to the Cache-Control header tomorrow. I have to shut down for today.

FYI the issue was a server setting on my end. The extension is working fine now.

@scottkf: What do you think about implementing the same "unique name" logic which is used in the Unique Upload Field extension?

private function getUniqueFilename(&$file) {
    ## since uniqid() is 13 characters, the unique filename will be limited to ($crop+1+13) characters
    $crop = 30;
    ## the dot must be escaped so that group 2 captures the extension (".ext") and keeps it intact
    return preg_replace("/(.*)(\.[^.]+)/e", "substr('$1', 0, $crop).'-'.uniqid().'$2'", $file);
}
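
For illustration: a file named interview.mp3 would come back as something like interview-4de70a2b1c3f2.mp3 (that suffix is just an example). uniqid() derives its 13 characters from the current time in microseconds, so collisions are very unlikely, and the file extension stays intact.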

I did some research on CloudFront usage.

Amazon CloudFront's default expiry time is 24 hours. It will, however, honour the Cache-Control header of Amazon S3 files if it exists (i.e. has been set upon uploading the object to S3). But the minimum lifetime of a file will be one hour. A Symphony user would be rather irritated if removing a file or deleting a complete entry did not remove the file from the web. The only way to actively delete a file from a CloudFront distribution is to send an invalidation request to CloudFront.

But I think that implementing such advanced CloudFront usage is far beyond the scope of the S3 Upload Field extension. And I don't think that there is a strong need for this anyway.

Nevertheless, being able to set the Cache-Control header (on a per-field basis) would be a nice feature, but it should be possible to omit the header (i.e. send the object to S3 without this header) by posting an empty value in the preferences. (That is not possible at the moment.)
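
In code terms it would just mean guarding the header with a check on the preference value (again, $prefValue and the behaviour are assumptions, not the extension's actual code):

// Only send the header when a value was actually entered in Preferences.
if (trim((string) $prefValue) !== '') {
    $requestHeaders['Cache-Control'] = 'max-age=' . (int) $prefValue;
}
// Otherwise no Cache-Control header goes to S3, and CloudFront falls back to
// its 24-hour default expiry mentioned above.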

I think that the most common use case for this extension is saving static images (like gallery images) to S3. Unfortunately the current JIT extension is unable to use caching for external files. If JIT could do this (and I have an idea how to achieve it), there would be no need to use any fast CDN. Just pushing your images to S3 would be fine, since the download speed would only matter for the first JIT process. JIT caching would also reduce the costs of S3 hosting a lot; as long as you don't need the full-sized original image (i.e. you serve thumbnails and medium-sized images through JIT), there would be virtually no S3 traffic costs (because one might only send header requests to S3).
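
To illustrate the "header requests only" part: whether the cached copy is still fresh can be checked without downloading the image body, for example with a HEAD request (a rough sketch of the idea, not JIT's actual code; the URL and cache path are placeholders):

// HEAD the S3 object and read its Last-Modified time via cURL.
$ch = curl_init('http://my-bucket.s3.amazonaws.com/gallery/photo.jpg'); // placeholder URL
curl_setopt($ch, CURLOPT_NOBODY, true);         // send a HEAD request, no body transfer
curl_setopt($ch, CURLOPT_FILETIME, true);       // ask for the remote file time
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$remoteTime = curl_getinfo($ch, CURLINFO_FILETIME); // Unix timestamp, or -1 if unknown
curl_close($ch);

// Only fetch the full image (and re-run JIT) when the cached copy is stale.
$stale = !file_exists($cachedFile) || ($remoteTime > 0 && filemtime($cachedFile) < $remoteTime);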

I will do some more tests on this. Maybe I will send a pull request for JIT (if my idea works).

[EDIT]: It should be noted that the above scenario would not need a Cache-Control header. It would only use the Date header of a file and leave the caching to JIT.

Sure, I can use that logic to generate the filename! I doubt you'll ever have a problem with collisions, but it can't hurt.

And I'll make it so you can have an empty value in the preferences as well. Let me know how it goes with JIT integration!

@scottkf and michael-e

Thanks for picking up on this extension. I must admit I can't truly actively maintain this extension right now, so please just let me know if you'd like me to move the extension download link to your fork. Otherwise just send a pull request when you've got your additions in and I'll be glad to update the current linked repo. Thanks again!

Just a short update for anybody who is interested:

Using the S3 Upload extension without the unique filename option may bring surprising results. :-)

The reason behind this is that putting an object to S3 will overwrite whatever object has been there with the same key. I found a good explanation on a page dealing with the AWS SDK for Java:

If versioning is enabled for the specified bucket, this operation will never overwrite an existing object at the same key, but instead will keep the existing object around as an older version until that version is explicitly deleted ...

If versioning is suspended or off, uploading an object to an existing key will overwrite the existing object because Amazon S3 stores the last write request. However, Amazon S3 is a distributed system. If Amazon S3 receives multiple write requests for the same object nearly simultaneously, all of the objects might be stored, even though only one wins in the end. Amazon S3 does not provide object locking; if you need this, make sure to build it into your application layer.

Now this is the intended behaviour of S3. Some client software (e.g. FTP programs which support S3 as well) nevertheless tries to emulate the behaviour of an operating system, i.e. warning the user and asking whether the file should be overwritten.

Should we implement this behaviour in this extension? Or should we simply say "you'd better use unique filenames"? (I will do the latter anyway, because I intend to use the extension in a multi-user environment.)

It creates unique filenames by default, but perhaps it could be better explained that in a multi-user environment you absolutely must use it. I don't think it's worth implementing to be honest, as long as it's documented. Using unique filenames doesn't really hurt anyone, except for those finicky people who like their URLs clean.

I don't think it's worth implementing to be honest

I agree! I just wanted people to know this.
